Andrzej Rusiecki
Wrocław University of Technology
Publications
Featured research published by Andrzej Rusiecki.
International Conference on Artificial Intelligence and Soft Computing | 2006
Andrzej Rusiecki
Training data containing outliers are often a problem for supervised neural network learning methods, which may then fail to reach acceptable performance. In this paper a new learning algorithm, robust to outliers, is proposed; it employs initial data analysis by the MCD (minimum covariance determinant) estimator. Results of implementation and simulation of networks trained with the new algorithm, the traditional backpropagation (BP) algorithm, and the robust LMLS method are presented and compared. The better performance and robustness against outliers of the new method are demonstrated.
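The abstract does not detail how the MCD estimator screens the training data, so the following is only a simplified illustration of the core MCD idea (iterated "C-step" trimming toward the minimum-covariance-determinant subset), with all parameters assumed; the full FAST-MCD algorithm and its coupling to network training are not shown:

```python
import numpy as np

def mcd_like_filter(X, keep_frac=0.75, n_iter=10, seed=0):
    """Crude MCD-style outlier screening via C-steps: repeatedly fit a
    mean/covariance on the keep_frac subset with the smallest Mahalanobis
    distances. Not the full FAST-MCD algorithm, just its central step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    h = int(keep_frac * n)
    idx = rng.choice(n, size=h, replace=False)       # random initial subset
    for _ in range(n_iter):
        mu = X[idx].mean(axis=0)
        cov = np.cov(X[idx].T) + 1e-6 * np.eye(d)    # regularize for invertibility
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        idx = np.argsort(d2)[:h]                     # keep the h closest points (C-step)
    return idx                                       # indices of presumed-clean samples

# toy data: a clean Gaussian cluster plus two gross outliers at rows 40 and 41
X = np.vstack([np.random.default_rng(1).normal(0, 1, (40, 2)),
               [[25.0, 25.0], [-30.0, 20.0]]])
clean = mcd_like_filter(X)
```

A network would then be trained only on `X[clean]`, which is presumably the spirit of the "initial data analysis" step described above.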
Neural Processing Letters | 2012
Andrzej Rusiecki
Outliers and gross errors in training data sets can seriously deteriorate the performance of traditional supervised feedforward neural network learning algorithms. This is why several learning methods, to some extent robust to outliers, have been proposed. In this paper we present a new robust learning algorithm based on the iterative Least Median of Squares, which outperforms some existing solutions in accuracy or speed. We demonstrate how to minimise the new non-differentiable performance function by a deterministic approximate method. Results of simulations and comparisons with other learning methods are presented. The improved robustness of our novel algorithm on data sets with varying degrees of outlier contamination is shown.
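The paper applies the Least Median of Squares criterion to neural network training; as a minimal stand-in, the sketch below applies the same iterative-trimming idea to a linear model (the deterministic approximation shown, refitting on the half of the data with the smallest residuals, is our assumption, not the paper's exact procedure):

```python
import numpy as np

def lmeds_linear_fit(X, y, n_iter=20):
    """Iterative Least-Median-of-Squares sketch for a linear model:
    repeatedly run least squares on the samples whose current squared
    residuals are at or below the median, so gross errors drop out."""
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])              # add a bias column
    w = np.linalg.lstsq(Xb, y, rcond=None)[0]         # ordinary LS start
    for _ in range(n_iter):
        r2 = (Xb @ w - y) ** 2
        keep = r2 <= np.median(r2)                    # trim the high-residual half
        w = np.linalg.lstsq(Xb[keep], y[keep], rcond=None)[0]
    return w

# line y = 2x + 1 with 10% of the targets grossly corrupted
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2 * x + 1 + 0.01 * rng.normal(size=50)
y[:5] += 20.0
w = lmeds_linear_fit(x[:, None], y)
```

Because the median ignores the largest residuals, the recovered slope and intercept stay close to the clean values despite the contamination.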
International Conference on Artificial Intelligence and Soft Computing | 2014
Andrzej Rusiecki; Mirosław Kordos; Tomasz Kamiński; Krzysztof Greń
This paper discusses approaches to noise-resistant training of MLP neural networks. We present various aspects of the issue and ways of reaching that goal using two groups of approaches and combinations of them. The first group is based on processing each vector differently depending on the likelihood of that vector being an outlier, determined by instance selection and outlier detection. The second group is based on training MLP neural networks with non-differentiable robust objective functions. We evaluate the performance of the particular methods with different levels of noise in the data for regression problems.
Neurocomputing | 2013
Andrzej Rusiecki
Gross errors and outliers in feedforward neural network training sets may often corrupt the performance of traditional learning algorithms. Such algorithms try to fit networks to the contaminated data, so the resulting model may be far from the desired one. In this paper we propose a new learning algorithm, robust to outliers, based on the concept of the least trimmed absolute value (LTA) estimator. The novel LTA algorithm is compared with the traditional approach and other robust learning methods. Experimental results presented in this article demonstrate the improved performance of the proposed training framework, especially for contaminated training data sets.
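The least trimmed absolute value idea itself is simple to sketch: sort the absolute residuals and sum only the smallest fraction of them. The function below is an illustrative loss, not the paper's full training algorithm, and the trim fraction is an assumed parameter:

```python
import numpy as np

def lta_loss(residuals, trim_frac=0.25):
    """Least Trimmed Absolute value criterion (sketch): sum only the
    smallest (1 - trim_frac) fraction of absolute residuals, so the
    largest ones (presumed outliers) never enter the objective."""
    r = np.sort(np.abs(np.asarray(residuals, dtype=float)))
    h = int(np.ceil((1.0 - trim_frac) * r.size))
    return r[:h].sum()

errors = np.array([0.1, -0.2, 0.05, 9.0])   # one gross error among four residuals
```

With the default trim fraction the gross error of 9.0 is excluded, so `lta_loss(errors)` depends only on the three small residuals.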
Soft Computing | 2010
Andrzej Rusiecki
Robust neural network learning algorithms are often applied to deal with the problem of gross errors and outliers. Unfortunately, such methods suffer from high computational complexity, which makes them inefficient. In this paper, we propose a new robust learning algorithm based on the LMLS (Least Mean Log Squares) error criterion. It can be considered a good trade-off between robustness to outliers and learning efficiency. As experimentally demonstrated, the novel method is not only faster but also more robust than the LMLS algorithm. Results of implementation and simulation of networks trained with the new algorithm, the traditional backpropagation (BP) algorithm, and the robust LMLS method are presented and compared.
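For reference, the LMLS criterion in its commonly used form (an assumption here; the abstract does not spell out the formula) replaces the squared error with a logarithmic term:

```python
import numpy as np

def lmls_loss(residuals):
    """LMLS (Least Mean Log Squares) criterion, commonly written as the
    mean of log(1 + e^2 / 2): it grows only logarithmically for large
    errors, so a single gross error cannot dominate the loss the way it
    dominates the quadratic MSE term."""
    return np.mean(np.log1p(np.asarray(residuals, dtype=float) ** 2 / 2.0))
```

For a residual of 10, MSE contributes 100 while LMLS contributes only about log(51) ≈ 3.9, which is what makes the criterion robust.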
International Conference on Theory and Practice of Natural Computing | 2013
Mirosław Kordos; Andrzej Rusiecki
In this paper we examine several methods for improving the performance of MLP neural networks by eliminating the influence of outliers, and compare them experimentally on several classification and regression tasks. The examined methods include: pre-training outlier elimination, use of different error measures during network training, replacing the weighted input sum with a weighted median in the neuron input functions, and various combinations of them. We show how these methods influence the network prediction. Based on the experimental results, we also present a novel hybrid approach improving the network performance.
International Conference on Artificial Intelligence and Soft Computing | 2012
Andrzej Rusiecki
In on-line data processing it is important to detect a novelty as soon as it appears, because it may be a consequence of gross errors or a sudden change in the analysed system. In this paper we present a framework for novelty detection based on robust neural networks. To detect novel patterns we compare the responses of two autoregressive neural networks. One of them is trained with a robust learning algorithm designed to remove the influence of outliers, while the other uses simple training based on the least squares error criterion. We also present a simple and easy to use approach that adapts this technique to data streams. Experiments conducted on data containing novelty and outliers have shown promising performance of the new method when applied to analyse temporal sequences.
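To illustrate the principle, the sketch below substitutes linear autoregressive models for the paper's neural networks (an assumption made purely for brevity): a robust fit, obtained by refitting on the low-residual half of the samples, models the nominal dynamics, and samples that the robust model cannot explain are flagged as novel. The threshold rule (k times the MAD of the residuals) is also assumed:

```python
import numpy as np

def detect_novelty(series, order=2, k=8.0):
    """Flag time indices whose residual under a robustly fitted AR model
    exceeds k * MAD. Linear AR fits stand in for the two neural networks
    of the paper; this is an illustration, not the published method."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    w = np.linalg.lstsq(X, y, rcond=None)[0]                  # plain least-squares fit
    r2 = (X @ w - y) ** 2
    keep = r2 <= np.median(r2)                                # trim high-residual rows
    w_rb = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]   # robust refit
    r = np.abs(X @ w_rb - y)
    mad = np.median(np.abs(r - np.median(r))) + 1e-12
    return order + np.flatnonzero(r > k * mad)                # flagged time indices

# a noisy sinusoid with one abrupt novel spike at t = 50
rng = np.random.default_rng(1)
series = np.sin(0.3 * np.arange(200)) + 0.01 * rng.normal(size=200)
series[50] += 5.0
flagged = detect_novelty(series)
```

The robust model tracks the sinusoid and ignores the spike during fitting, so the spike shows up as a large residual and is flagged.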
Schedae Informaticae | 2015
Andrzej Rusiecki; Mirosław Kordos
Deep learning is a field of research attracting much attention nowadays, mainly because deep architectures help in obtaining outstanding results on many vision, speech and natural language processing related tasks. To make deep learning effective, an unsupervised pretraining phase is very often applied. In this article, we present an experimental study evaluating the usefulness of such an approach, testing on several benchmarks and different percentages of labeled data how Contrastive Divergence (CD), one of the most popular pretraining methods, influences network generalization.
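The abstract names Contrastive Divergence without details; a minimal CD-1 sketch for a Bernoulli RBM on toy data may help fix ideas. All hyperparameters, the toy data, and the reconstruction-error check are our assumptions, not the paper's experimental setup:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(params, v0, rng, lr=0.1):
    """One Contrastive Divergence (CD-1) update for a Bernoulli RBM:
    positive phase on the data batch v0, a single Gibbs step for the
    negative phase, then a gradient step on weights and biases."""
    W, a, b = params                                   # weights, visible and hidden biases
    ph0 = sigmoid(v0 @ W + b)                          # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0           # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + a)                        # reconstruction probabilities
    ph1 = sigmoid(pv1 @ W + b)                         # hidden probs on reconstruction
    n = v0.shape[0]
    W = W + lr * (v0.T @ ph0 - pv1.T @ ph1) / n        # positive minus negative statistics
    a = a + lr * (v0 - pv1).mean(axis=0)
    b = b + lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

def recon_error(params, V, rng):
    """Mean squared error between data and its one-step reconstruction."""
    W, a, b = params
    h = (rng.random((V.shape[0], W.shape[1])) < sigmoid(V @ W + b)) * 1.0
    return np.mean((V - sigmoid(h @ W.T + a)) ** 2)

# toy data: 6 binary visible units that come in two correlated triples
rng = np.random.default_rng(0)
bits = (rng.random((200, 2)) < 0.5) * 1.0
V = np.repeat(bits, 3, axis=1)
params = (0.01 * rng.normal(size=(6, 8)), np.zeros(6), np.zeros(8))

e_before = recon_error(params, V, rng)
for _ in range(500):
    params = cd1_step(params, V, rng)
e_after = recon_error(params, V, rng)
```

In pretraining, the weights learned this way would initialize the corresponding layer of a deep network before supervised fine-tuning.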
Intelligent Data Engineering and Automated Learning | 2014
Mirosław Kordos; Andrzej Rusiecki; Tomasz Kamiński; Krzysztof Greń
The advantage of the Variable Step Search algorithm, a simple local search-based method of MLP training, is that it does not require differentiable error functions, has better convergence properties than backpropagation, and has lower memory requirements and computational cost than global optimization and second-order methods. However, in some applications the issue of training time reduction becomes very important. In this paper we evaluate several approaches to achieving this reduction.
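The abstract does not describe the Variable Step Search algorithm itself, so the following is only a generic local-search sketch in its spirit (per-weight adaptive steps, acceptance by loss evaluation alone); the grow/shrink rule and all parameters are assumptions, not the published algorithm:

```python
import numpy as np

def vss_like_train(loss, w, step=0.5, n_epochs=40, grow=2.0, shrink=0.5):
    """Local search with a variable step per parameter: try moving each
    weight by +/- its own step, keep a move that lowers the loss and grow
    that step, otherwise shrink it. Needs only loss evaluations, so
    non-differentiable error functions pose no problem."""
    steps = np.full_like(w, step)
    best = loss(w)
    for _ in range(n_epochs):
        for i in range(w.size):
            for s in (steps[i], -steps[i]):
                cand = w.copy()
                cand[i] += s
                c = loss(cand)
                if c < best:                      # accept the improving move
                    w, best, steps[i] = cand, c, steps[i] * grow
                    break
            else:                                 # no improvement in either direction
                steps[i] *= shrink
    return w

# toy use: minimise a non-differentiable L1 objective over three weights
target = np.array([1.5, -2.0, 0.25])
w = vss_like_train(lambda v: np.sum(np.abs(v - target)), np.zeros(3))
```

Because acceptance only compares loss values, this family of methods pairs naturally with the robust, non-differentiable criteria discussed in the other papers above.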
International Conference on Artificial Intelligence and Soft Computing | 2013
Andrzej Rusiecki
In this paper, we present a preliminary experimental study of the generalization abilities of feedforward neural networks with the median neuron input function (MIF). In these networks, proposed in our previous work, the signals fed to a neuron are not summed; instead, the median of the input signals is calculated. The MIF networks were designed to be fault tolerant, but we expect them also to have improved generalization ability. Results of the first experimental simulations are presented and described in this article. The potentially improved performance of the MIF networks is demonstrated.
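A one-layer forward pass with a median input function can be sketched as follows; the exact handling of the bias term is our assumption, since the abstract does not specify it:

```python
import numpy as np

def mif_layer(x, W, b):
    """One layer with a Median Input Function: each neuron aggregates its
    weighted inputs w_ij * x_j with a median rather than a sum (bias added
    after the median, an assumed detail), so no single input signal can
    dominate the pre-activation."""
    z = np.median(W * x[None, :], axis=1) + b     # W has shape (n_neurons, n_inputs)
    return np.tanh(z)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(3, 8))
b = np.zeros(3)

x_faulty = x.copy()
x_faulty[0] = 1e6                                 # one stuck-at-large input signal
out_clean, out_faulty = mif_layer(x, W, b), mif_layer(x_faulty, W, b)
```

The fault tolerance is visible in the pre-activations: a single corrupted input shifts the median by at most the spread of the remaining weighted inputs, while an ordinary weighted sum is thrown off without bound.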