Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adam Krzyzak is active.

Publication


Featured research published by Adam Krzyzak.


IEEE Transactions on Systems, Man, and Cybernetics | 1992

Methods of combining multiple classifiers and their applications to handwriting recognition

Lei Xu; Adam Krzyzak; Ching Y. Suen

Possible solutions to the problem of combining classifiers can be divided into three categories according to the levels of information available from the various classifiers. Four approaches based on different methodologies are proposed for solving this problem. One is suitable for combining individual classifiers such as Bayesian, k-nearest-neighbor, and various distance classifiers. The other three could be used for combining any kind of individual classifier. On applying these methods to combine several classifiers for recognizing totally unconstrained handwritten numerals, the experimental results show that the performance of individual classifiers can be improved significantly. For example, on the US zipcode database, 98.9% recognition with 0.90% substitution and 0.2% rejection can be obtained, as well as high reliability with 95% recognition, 0% substitution, and 5% rejection.
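
The first of these categories operates on bare label outputs, and a plurality vote over the labels is the simplest such combiner. A minimal sketch (the function name and tie-breaking rule are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine label outputs from several classifiers by plurality vote.

    predictions: one predicted label per classifier for a single sample.
    Ties are broken in favor of the label that appears first in the list.
    """
    counts = Counter(predictions)
    best_label, _ = counts.most_common(1)[0]
    return best_label

# Three classifiers vote on a handwritten digit; the plurality label wins.
print(majority_vote([7, 7, 1]))  # 7
```

Measurement-level combiners would instead operate on per-class scores, which is where the Bayesian approach in the paper applies.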


IEEE Transactions on Neural Networks | 1993

Rival penalized competitive learning for clustering analysis, RBF net, and curve detection

Lei Xu; Adam Krzyzak; Erkki Oja

It is shown that frequency sensitive competitive learning (FSCL), one version of the recently improved competitive learning (CL) algorithms, significantly deteriorates in performance when the number of units is inappropriately selected. An algorithm called rival penalized competitive learning (RPCL) is proposed. In this algorithm, for each input, not only is the winner unit modified to adapt to the input, but its rival (the second winner) is also delearned at a smaller learning rate. RPCL can be regarded as an unsupervised extension of Kohonen's supervised LVQ2. RPCL has the ability to automatically allocate an appropriate number of units for an input data set. The experimental results show that RPCL outperforms FSCL when used for unsupervised classification, for training a radial basis function (RBF) network, and for curve detection in digital images.
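
The core update rule fits in a few lines. A minimal sketch, assuming Euclidean distance and fixed scalar rates with the delearning rate much smaller than the learning rate (not the authors' implementation):

```python
import math

def rpcl_step(units, x, alpha_w=0.05, alpha_r=0.005):
    """One rival-penalized competitive learning update, in place.

    The winner (nearest unit) moves toward the input x; the rival
    (second-nearest unit) is delearned, i.e. pushed away from x,
    at the much smaller rate alpha_r. units and x are lists of floats.
    """
    order = sorted(range(len(units)), key=lambda i: math.dist(units[i], x))
    win, riv = order[0], order[1]
    for d in range(len(x)):
        units[win][d] += alpha_w * (x[d] - units[win][d])  # attract winner
        units[riv][d] -= alpha_r * (x[d] - units[riv][d])  # repel rival
```

Repeating this step over a data stream is what lets extra units get pushed away from the data, so only an appropriate number of units survive near clusters.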


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Fast SVM training algorithm with decomposition on very large data sets

Jian-xiong Dong; Adam Krzyzak; Ching Y. Suen

Training a support vector machine on a data set of huge size with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve this problem. The key idea is to introduce a parallel optimization step to quickly remove most of the nonsupport vectors, where block diagonal matrices are used to approximate the original kernel matrix so that the original problem can be split into hundreds of subproblems which can be solved more efficiently. In addition, some effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis of the proposed algorithm shows that its time complexity grows linearly with the number of classes and the size of the data set. In the experiments, many appealing properties of the proposed algorithm have been investigated, and the results show that the proposed algorithm has a much better scaling capability than Libsvm, SVMlight, and SVMTorch. Moreover, good generalization performance has also been achieved on several large databases.
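
The decomposition idea is easy to picture: kernel entries between different index blocks are treated as zero, so the one large QP falls apart into one small independent subproblem per block. A toy sketch of the partitioning step (illustrative only; the paper's solver then optimizes each block and discards its non-support vectors):

```python
def block_diagonal_blocks(n, block_size):
    """Partition indices 0..n-1 into contiguous blocks.

    Under the block-diagonal kernel approximation, entries between
    different blocks are zeroed, so the n-variable SVM dual decomposes
    into one independent subproblem per returned block.
    """
    return [list(range(start, min(start + block_size, n)))
            for start in range(0, n, block_size)]

print(block_diagonal_blocks(10, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Since the blocks share no variables, the subproblems can be solved in parallel, which is where the speedup comes from.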


Neural Networks | 1994

On radial basis function nets and kernel regression: statistical consistency, convergence rates, and receptive field size

Lei Xu; Adam Krzyzak; Alan L. Yuille

Useful connections between radial basis function (RBF) nets and kernel regression estimators (KRE) are established. By using existing theoretical results obtained for KRE as tools, we obtain a number of interesting theoretical results for RBF nets. Upper bounds are presented for convergence rates of the approximation error with respect to the number of hidden units. The existence of a consistent estimator for RBF nets is proven constructively. Upper bounds are also provided for the pointwise and L2 convergence rates of the best consistent estimator for RBF nets as the numbers of both the samples and the hidden units tend to infinity. Moreover, the problem of selecting the appropriate size of the receptive field of the radial basis function is theoretically investigated and the way this selection is influenced by various factors is elaborated. In addition, some results are also given for the convergence of the empirical error obtained by the least squares estimator for RBF nets.
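
The link runs through the Nadaraya-Watson kernel regression estimator, which a normalized RBF net with one Gaussian unit per sample reproduces exactly; the bandwidth h plays the role of the receptive-field size studied in the paper. A one-dimensional sketch (a hypothetical helper, not the paper's code):

```python
import math

def kernel_regression(x, xs, ys, h=1.0):
    """Nadaraya-Watson estimate at x with a Gaussian kernel.

    xs, ys: training inputs and outputs. Each training point acts as
    one normalized RBF unit; h is the shared receptive-field size.
    """
    weights = [math.exp(-((x - xi) / h) ** 2 / 2.0) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

Too small an h makes the estimate noisy, too large an h oversmooths it, which is exactly the receptive-field-size trade-off the paper quantifies.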


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1994

Robust estimation for range image segmentation and reconstruction

Xinming Yu; Tien D. Bui; Adam Krzyzak

This correspondence presents a segmentation and fitting method using a new robust estimation technique. We present a robust estimation method with a high breakdown point which can tolerate more than 80% of outliers. The method randomly samples appropriate range image points in the current processing region and solves equations determined by these points for the parameters of the selected primitive type. From K samples, we choose the one set of sample points that determines a best-fit equation for the largest homogeneous surface patch in the region. This choice is made by measuring a residual consensus (RESC), using a compressed histogram method which is effective at various noise levels. After we get the best-fit surface parameters, the surface patch can be segmented from the region, and the process is repeated until no pixels are left. The method segments the range image into planar and quadratic surfaces. The RESC method is a substantial improvement over the least median squares method, using a histogram approach to infer residual consensus. A genetic algorithm is also incorporated to accelerate the random search.
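
The sample-and-score loop resembles RANSAC, with the residual-consensus measure replacing a plain inlier count. The sketch below simplifies the compressed-histogram score to a thresholded residual count for a 2D line primitive (illustrative only, not the paper's RESC or its genetic-algorithm search):

```python
import random

def resc_line_fit(points, k=200, threshold=0.1, seed=0):
    """Robust line fit in the spirit of residual-consensus sampling.

    Repeatedly sample 2 points, solve y = a*x + b through them, and
    keep the sample whose residuals concentrate near zero (here
    simplified to: the most residuals below a fixed threshold).
    """
    rng = random.Random(seed)
    best, best_score = None, -1
    for _ in range(k):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair; skip this sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        score = sum(abs(y - (a * x + b)) < threshold for x, y in points)
        if score > best_score:
            best, best_score = (a, b), score
    return best
```

Because only the best-scoring sample is kept, gross outliers never contaminate the fit, mirroring the high breakdown point claimed for RESC.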


IEEE Transactions on Neural Networks | 1998

Radial basis function networks and complexity regularization in function learning

Adam Krzyzak; Tamás Linder

In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
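
The principle can be sketched as penalized model selection: among candidate network sizes, pick the one minimizing empirical risk plus a penalty that grows with model size and decays with sample size. The penalty form below (c·k·log n / n) is a generic stand-in, not the paper's covering-number-based bound:

```python
import math

def complexity_regularized_choice(empirical_risks, n, c=1.0):
    """Pick the model size minimizing risk + complexity penalty.

    empirical_risks: dict mapping model size k (e.g. number of RBF
    units) to its empirical risk on n samples. The penalty term
    c * k * log(n) / n discourages large networks on small samples.
    """
    best_k, best_val = None, float("inf")
    for k, risk in empirical_risks.items():
        penalized = risk + c * k * math.log(n) / n
        if penalized < best_val:
            best_k, best_val = k, penalized
    return best_k
```

A huge network with marginally lower training risk loses to a moderate one, which is how complexity regularization avoids overfitting.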


international conference on document analysis and recognition | 1993

Segmentation of handwritten digits using contour features

Nick W. Strathy; Ching Y. Suen; Adam Krzyzak

A new method of separating touching unconstrained handwritten digits is proposed. A binary image containing a string of touching digits is scanned to give contour chains. The chains are analyzed and subdivided into four kinds of regions: valleys, mountains, holes, and open regions. Individual points of interest in the outer contour are then identified, e.g., points of high curvature. The separating path is assumed to pass between some pair of these significant contour points (SCPs). Nine features of the SCPs are measured and are used to sort the list of all possible pairings of SCPs. Preliminary results show that the correct cut is sorted within the first three choices in 89% of tests.
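
A common way to flag high-curvature contour points is by the turning angle along the contour chain. The sketch below uses that single cue (the threshold and helper name are illustrative; the paper measures nine features per candidate SCP pair):

```python
import math

def significant_contour_points(contour, angle_thresh=1.5):
    """Indices of contour points whose turning angle exceeds a threshold,
    a simple stand-in for high-curvature significant contour points.

    contour: list of (x, y) points in order around a closed shape.
    """
    scps = []
    n = len(contour)
    for i in range(n):
        (x0, y0) = contour[i - 1]
        (x1, y1) = contour[i]
        (x2, y2) = contour[(i + 1) % n]
        a_in = math.atan2(y1 - y0, x1 - x0)    # incoming direction
        a_out = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        turn = abs((a_out - a_in + math.pi) % (2 * math.pi) - math.pi)
        if turn > angle_thresh:
            scps.append(i)
    return scps
```

On a square contour this flags exactly the four corners, i.e. the 90-degree turns.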


International Journal of Applied Mathematics and Computer Science | 2008

Classification of Breast Cancer Malignancy Using Cytological Images of Fine Needle Aspiration Biopsies

Łukasz Jeleń; Thomas Fevens; Adam Krzyzak

According to the World Health Organization (WHO), breast cancer (BC) is one of the most deadly cancers diagnosed among middle-aged women. Precise diagnosis and prognosis are crucial to reduce the high death rate. In this paper we present a framework for automatic malignancy grading of fine needle aspiration biopsy tissue. The malignancy grade is one of the most important factors taken into consideration during the prediction of cancer behavior after the treatment. Our framework is based on classification using Support Vector Machines (SVM). The SVMs presented here are able to assign a malignancy grade based on preextracted features with an accuracy of up to 94.24%. We also show that SVMs performed best out of the four tested classifiers.
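
For a flavor of the classifier involved, a linear SVM can be trained with a Pegasos-style stochastic subgradient method. The sketch below is a generic two-class trainer on toy features, not the authors' model, features, or data:

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by stochastic subgradient descent (Pegasos-style).

    data: list of feature vectors; labels: +1 / -1 (e.g. malignant vs.
    benign on preextracted features). Returns weights w and bias b.
    """
    rng = random.Random(seed)
    w = [0.0] * len(data[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            x, y = data[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1.0:  # hinge-loss subgradient step
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b
```

In practice one would use a tuned kernel SVM as the paper does; this only illustrates the max-margin training objective.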


IEEE Transactions on Neural Networks | 1996

Nonparametric estimation and classification using radial basis function nets and empirical risk minimization

Adam Krzyzak; Tamás Linder; Gábor Lugosi

Studies convergence properties of radial basis function (RBF) networks for a large class of basis functions, and reviews the methods and results related to this topic. The authors obtain the network parameters through empirical risk minimization. The authors show the optimal nets to be consistent in the problem of nonlinear function approximation and in nonparametric classification. For the classification problem the authors consider two approaches: the selection of the RBF classifier via nonlinear function estimation and the direct method of minimizing the empirical error probability. The tools used in the analysis include distribution-free nonasymptotic probability inequalities and covering numbers for classes of functions.
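
The direct method mentioned here minimizes the empirical misclassification rate over a class of RBF classifiers. The quantity itself is a one-liner (generic sketch, with a hypothetical classifier interface):

```python
def empirical_error(classifier, data, labels):
    """Fraction of samples a classifier gets wrong: the empirical
    probability of error that the direct method minimizes over a
    class of candidate classifiers.
    """
    wrong = sum(classifier(x) != y for x, y in zip(data, labels))
    return wrong / len(data)
```

The paper's analysis bounds how far the minimizer of this empirical quantity can be from the true error probability, using covering numbers for the RBF class.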


International Journal of Systems Science | 1989

Identification of discrete Hammerstein systems by the Fourier series regression estimate

Adam Krzyzak

The identification of a single-input, single-output (SISO) discrete Hammerstein system is studied. Such a system consists of a non-linear memoryless subsystem followed by a dynamic, linear subsystem. The parameters of the dynamic, linear subsystem are identified by a correlation method and the Newton-Gauss method. The main results concern the identification of the non-linear, memoryless subsystem. No conditions are imposed on the functional form of the non-linear subsystem, which is recovered using the Fourier series regression estimate. The density-free pointwise convergence of the estimate is proved, that is, the algorithm converges for all input densities. The rate of pointwise convergence is obtained for smooth input densities and for non-linearities of Lipschitz type. Global convergence and its rate are also studied for a large class of non-linearities and input densities.
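
For uniformly distributed inputs on [-pi, pi], a Fourier series regression estimate reduces to averaging the outputs against the trigonometric basis at the observed inputs and evaluating the truncated series. A sketch under that uniform-density assumption (the density-free estimate in the paper instead divides by a matching series estimate of the input density):

```python
import math

def fourier_regression_estimate(u_samples, y_samples, num_terms=6):
    """Fourier series regression estimate of a nonlinearity m(u).

    Assumes inputs uniformly distributed on [-pi, pi]; the empirical
    averages below then estimate the Fourier coefficients of m, and
    the returned function evaluates the truncated series.
    """
    n = len(u_samples)
    a = [sum(y * math.cos(k * u) for u, y in zip(u_samples, y_samples)) / n
         for k in range(num_terms)]
    b = [sum(y * math.sin(k * u) for u, y in zip(u_samples, y_samples)) / n
         for k in range(1, num_terms)]

    def estimate(u):
        s = a[0] + 2.0 * sum(a[k] * math.cos(k * u)
                             for k in range(1, num_terms))
        return s + 2.0 * sum(b[k - 1] * math.sin(k * u)
                             for k in range(1, num_terms))
    return estimate
```

On noise-free samples of a nonlinearity that lies in the span of the basis, e.g. m(u) = 1 + 2 cos(u), the truncated series recovers m essentially exactly.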

Collaboration


Dive into Adam Krzyzak's collaboration.

Top Co-Authors

Michael Kohler

Technische Universität Darmstadt

Harro Walk

University of Stuttgart

László Györfi

Budapest University of Technology and Economics

Lei Xu

Shanghai Jiao Tong University
