Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sarunas Raudys is active.

Publication


Featured research published by Sarunas Raudys.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991

Small sample size effects in statistical pattern recognition: recommendations for practitioners

Sarunas Raudys; Anil K. Jain

The effects of sample size on feature selection and error estimation for several types of classifiers are discussed. The focus is on the two-class problem. Classifier design in the context of a small design sample size is explored, and the estimation of error rates from a small test sample is examined. Sample size effects in feature selection are discussed, and recommendations for the choice of learning and test sample sizes are given. In addition to surveying prior work in this area, an emphasis is placed on giving practical advice to designers and users of statistical pattern recognition systems.
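A practical way to see the paper's central quantity, the learning-set size to dimensionality ratio N/p, is to measure test error while it varies. The sketch below does this on synthetic two-class Gaussian data with a linear discriminant; the dimensionality, mean separation, and sample sizes are arbitrary choices for the illustration, not values from the paper.

```python
# Illustrative sketch (not the paper's experiment): how the test error of a
# linear discriminant depends on the learning-set-size/dimensionality ratio N/p.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
p = 20                           # dimensionality (arbitrary demo value)
delta = np.full(p, 0.4)          # mean difference between the two classes

def sample(n_per_class):
    x0 = rng.normal(size=(n_per_class, p))
    x1 = rng.normal(size=(n_per_class, p)) + delta
    return np.vstack([x0, x1]), np.repeat([0, 1], n_per_class)

X_test, y_test = sample(5000)    # large test set -> reliable error estimate
for ratio in [1.5, 3, 5, 10, 20]:            # N/p, N = vectors per class
    n = int(ratio * p)
    errs = [1.0 - LinearDiscriminantAnalysis().fit(*sample(n))
                                               .score(X_test, y_test)
            for _ in range(20)]               # average over random designs
    print(f"N/p = {ratio:5.1f}  mean test error = {np.mean(errs):.3f}")
```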


Neurocomputing | 1996

Variable selection with neural networks

Tautvydas Cibas; Françoise Fogelman Soulié; Patrick Gallinari; Sarunas Raudys

In this paper, we present three different neural network-based methods to perform variable selection. OCD (Optimal Cell Damage) is a pruning method which evaluates the usefulness of each variable and prunes the least useful ones (it is related to the Optimal Brain Damage method of Le Cun et al.). Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log prior of the weight distribution. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach makes it possible to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to improve on them significantly.
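Under the Bayesian reading above, a Gaussian prior on the weights corresponds to a quadratic weight-decay penalty. The following minimal sketch assumes only that prior (the Gaussian-mixture prior is omitted): a small network is trained with L2 regularization and the inputs are ranked by the magnitude of their fan-out weights, so decayed, uninformative inputs fall to the bottom of the ranking. The data, network size, and penalty strength are demo assumptions.

```python
# Minimal sketch of prior/penalty-based variable ranking, assuming only a
# Gaussian prior (i.e. an L2 weight-decay term). Synthetic data: only the
# first 3 of 10 inputs carry class information.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n, p, informative = 2000, 10, 3
X = rng.normal(size=(n, p))
y = (X[:, :informative].sum(axis=1) > 0).astype(int)   # depends on 3 inputs

net = MLPClassifier(hidden_layer_sizes=(8,), alpha=1.0,   # alpha = L2 decay
                    max_iter=2000, random_state=0).fit(X, y)

# Rank each input by the L2 norm of its fan-out weights in the first layer;
# decayed (useless) inputs end up with near-zero fan-out.
saliency = np.linalg.norm(net.coefs_[0], axis=1)
print("input ranking (most useful first):", np.argsort(saliency)[::-1])
```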


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

On dimensionality, sample size, and classification error of nonparametric linear classification algorithms

Sarunas Raudys

This paper compares two nonparametric linear classification algorithms - the zero empirical error classifier and the maximum margin classifier - with parametric linear classifiers designed to classify multivariate Gaussian populations. Formulae and a table for the mean expected probability of misclassification MEP_N are presented. They show that the classification error is mainly determined by N/p, a learning-set size/dimensionality ratio. However, the influences of learning-set size on the generalization error of parametric and nonparametric linear classifiers are quite different. Under certain conditions the nonparametric approach allows us to obtain reliable rules, even in cases where the number of features is larger than the number of training vectors.
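A hedged illustration of the comparison, not the paper's own experiment: on spherical Gaussian classes, a maximum-margin linear classifier (here scikit-learn's LinearSVC with a large C, standing in for the margin-based rules studied in the paper) is pitted against a parametric linear discriminant across several N/p ratios, including N < p, where a margin-based rule can still produce a usable decision.

```python
# Demo comparison of a maximum-margin linear classifier vs. a parametric
# linear discriminant on spherical Gaussian classes, across N/p ratios.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
p = 30
delta = np.full(p, 0.3)

def sample(n):
    X = rng.normal(size=(2 * n, p))
    X[n:] += delta                    # mean shift for class 1
    return X, np.repeat([0, 1], n)

X_test, y_test = sample(5000)
for ratio in [0.5, 1, 2, 5]:          # includes N < p
    n = max(2, int(ratio * p))
    Xtr, ytr = sample(n)
    svm = LinearSVC(C=1e3, max_iter=20000).fit(Xtr, ytr)   # near hard margin
    lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
    print(f"N/p={ratio:4.1f}  max-margin err={1 - svm.score(X_test, y_test):.3f}"
          f"  parametric err={1 - lda.score(X_test, y_test):.3f}")
```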


International Conference on Artificial Neural Networks | 1994

Variable Selection with Optimal Cell Damage

Tautvydas Cibas; Françoise Fogelman Soulié; Patrick Gallinari; Sarunas Raudys

Neural networks (NNs) have been used in a large variety of real-world applications in which one can measure a potentially large number N of variables Xi; probably not all Xi are equally informative. If one could select the n ≪ N "best" variables Xi, one could reduce the amount of data to gather and process, and hence reduce costs. Variable selection is thus an important issue in pattern recognition and regression. It is also a complex problem: one needs a criterion to measure the value of a subset of variables, and that value will of course depend on the predictor or classifier used afterwards. Conventional variable selection techniques are based upon statistical or heuristic tools [Fukunaga, 90]; the major difficulty comes from the intrinsic combinatorics of the problem. In this paper we show how to use NNs for variable selection with a criterion based upon the evaluation of a variable's usefulness. Various methods have been proposed to assess the value of a weight (e.g. the saliency [Le Cun et al., 90] in the Optimal Brain Damage (OBD) procedure); along similar lines, we derive a method, called Optimal Cell Damage (OCD), which evaluates the usefulness of input variables in a multi-layer network and prunes the least useful ones. Variable selection is thus achieved during training of the classifier, ensuring that the selected set of variables matches the classifier complexity. Variable selection is viewed here as an extension of weight pruning. One can also use a regularization approach to variable selection, which we discuss elsewhere [Cibas et al., 94]. We illustrate our method on two relatively small problems: prediction of a synthetic time series and classification of waveforms [Breiman et al., 84], representative of relatively hard problems.
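The cell-damage idea boils down to a prune-and-retrain loop: score each input cell, drop the least useful one, and keep the smallest subset whose validation accuracy has not degraded. The true OCD saliency involves second-derivative (OBD-style) terms; the sketch below substitutes the fan-out weight magnitude of each input as a stand-in, and the data, network size, and tolerance are demo assumptions.

```python
# Sketch of a cell-damage style prune-and-retrain loop with a magnitude-based
# proxy for the OCD saliency (the real saliency uses OBD second derivatives).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] - X[:, 1] > 0).astype(int)        # only inputs 0 and 1 matter
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

active, history = list(range(X.shape[1])), []
while active:
    net = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000,
                        random_state=0).fit(Xtr[:, active], ytr)
    history.append((list(active), net.score(Xva[:, active], yva)))
    if len(active) == 1:
        break
    saliency = np.linalg.norm(net.coefs_[0], axis=1)   # proxy saliency
    active.pop(int(np.argmin(saliency)))               # prune least useful

# keep the smallest subset whose accuracy is within 1% of the best seen
best = max(acc for _, acc in history)
chosen = min((s for s, acc in history if acc >= best - 0.01), key=len)
print("selected variables:", chosen)
```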


Neural Networks | 2000

How good are support vector machines?

Sarunas Raudys

Support vector (SV) machines are useful tools for classifying populations characterized by abrupt decreases in density functions. For at least one class of Gaussian data models, however, the SV classifier is not optimal according to a mean generalization error criterion. In real-world problems we have neither Gaussian populations nor data with sharp linear boundaries. Thus, SV (maximal margin) classifiers can lose against other methods in which more than a fixed number of support vectors contribute to determining the final weights of the classification and prediction rules. A good alternative to the linear SV machine is a specially trained and optimally stopped single-layer perceptron (SLP) in a transformed feature space obtained after decorrelating and scaling the multivariate data.
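The alternative suggested in the last sentence can be sketched directly: whiten the data (decorrelate and scale), then train a single-layer perceptron with early stopping, and compare it against a linear SV machine. The correlated Gaussian data and every setting below are demo assumptions, not the paper's experimental setup.

```python
# Demo: whitened, early-stopped single-layer perceptron vs. a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
p = 10
M = rng.normal(size=(p, p))
Lc = np.linalg.cholesky(M @ M.T + np.eye(p))   # a correlated covariance

def sample(n):
    X = rng.normal(size=(2 * n, p)) @ Lc.T
    X[n:] += 1.0                               # mean shift for class 1
    return X, np.repeat([0, 1], n)

Xtr, ytr = sample(200)
Xte, yte = sample(5000)

svm = LinearSVC(max_iter=20000).fit(Xtr, ytr)

# decorrelate and scale, then an SGD-trained perceptron with early stopping
slp = make_pipeline(
    PCA(whiten=True),                          # decorrelate + scale
    SGDClassifier(loss="perceptron", early_stopping=True,
                  validation_fraction=0.2, random_state=0),
).fit(Xtr, ytr)

print("linear SVM error  :", 1 - svm.score(Xte, yte))
print("whitened SLP error:", 1 - slp.score(Xte, yte))
```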


Multiple Classifier Systems | 2002

An Experimental Comparison of Fixed and Trained Fusion Rules for Crisp Classifier Outputs

Fabio Roli; Sarunas Raudys; Gian Luca Marcialis

At present, fixed rules for classifier combination are the most used and most widely investigated ones, while the study and application of trained rules has received much less attention. Therefore, the pros and cons of fixed and trained rules are only partially known, even when one focuses on crisp classifier outputs. In this paper, we report the results of an experimental comparison of well-known fixed and trained rules for crisp classifier outputs. The reported experiments allow one to draw some preliminary conclusions about the comparative advantages of fixed and trained fusion rules.
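A minimal version of the comparison, with arbitrary base learners: a fixed rule (majority vote over crisp 0/1 outputs) against a trained rule (a logistic regression fitted on those same crisp outputs, using data held out from the base classifiers).

```python
# Demo: fixed fusion (majority vote) vs. trained fusion (classifier on the
# crisp base outputs). Base learners and data are arbitrary demo choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# hold out part of the training data to fit the trained fusion rule
Xb, Xf, yb, yf = train_test_split(Xtr, ytr, random_state=1)

# three weak, diverse base classifiers producing crisp (0/1) outputs
bases = [DecisionTreeClassifier(max_depth=d, random_state=i).fit(Xb, yb)
         for i, d in enumerate([1, 2, 3])]
crisp = lambda Z: np.column_stack([b.predict(Z) for b in bases])

# fixed rule: majority vote over the crisp outputs
vote = (crisp(Xte).mean(axis=1) > 0.5).astype(int)
# trained rule: logistic regression on the crisp outputs
fuser = LogisticRegression().fit(crisp(Xf), yf)

print("majority vote acc :", (vote == yte).mean())
print("trained fusion acc:", fuser.score(crisp(Xte), yte))
```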


Lecture Notes in Computer Science | 2006

Feature over-selection

Sarunas Raudys

We propose a probabilistic framework for analyzing the inaccuracies that arise in feature selection (FS) when flawed estimates of the performance of feature subsets are utilized. The approach is based on an analysis of a random-search FS procedure and the postulate that the joint distribution of the true and estimated classification errors is known a priori. We derive expected values for the FS bias, the difference between the actual classification error after FS and the classification error if ideal FS were performed according to exact estimates. The increase in true classification error due to inaccurate FS is comparable to, or even exceeds, the training bias, the difference between the generalization and Bayes errors. We show that an overfitting phenomenon exists in feature selection, termed in this paper feature over-selection. The effects of feature over-selection can be reduced if FS is performed on the basis of positional statistics. The theoretical results are supported by experiments carried out on simulated Gaussian data, as well as on high-dimensional microarray gene expression data.
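The FS bias is easy to reproduce numerically under the abstract's postulate that true and estimated errors are jointly distributed: draw the true errors of many candidate subsets, add estimation noise, select by the noisy estimates, and compare against ideal selection. The uniform error range and noise level below are demo assumptions.

```python
# Numerical sketch of feature over-selection: selecting among many subsets
# by noisy error estimates inflates the true error of the chosen subset.
import numpy as np

rng = np.random.default_rng(5)
n_subsets, n_trials, sigma = 1000, 2000, 0.05     # estimate-noise std

gap = []
for _ in range(n_trials):
    true_err = rng.uniform(0.10, 0.30, size=n_subsets)  # true subset errors
    est_err = true_err + rng.normal(0.0, sigma, size=n_subsets)
    picked = np.argmin(est_err)        # FS by flawed estimates
    ideal = np.argmin(true_err)        # ideal FS by exact errors
    gap.append(true_err[picked] - true_err[ideal])

print(f"mean FS bias (selected - ideal true error): {np.mean(gap):.4f}")
```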


IEEE Transactions on Neural Networks | 2013

Portfolio of Automated Trading Systems: Complexity and Learning Set Size Issues

Sarunas Raudys

In this paper, we consider using the profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influence of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that the degradation in portfolio performance due to inexact estimation of the N means and N(N - 1)/2 correlations is proportional to N/L; estimation of the N variances, however, does not worsen the result. To reduce harmful sample size/dimensionality effects, we perform a clustering of the N time series and split them into a small number of blocks, each composed of mutually correlated ATSs. Each block generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. At the output of the portfolio management system, a regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with real financial data (2003-2012) confirm the effectiveness of the suggested approach.
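A compact sketch of the block structure described above, on synthetic returns: cluster the N series by correlation, form one 1/N expert per cluster, and combine the experts with a regularized mean-variance weighting. The return model, cluster count, and ridge term are demo assumptions, and the walk-forward validation loop is omitted.

```python
# Demo: correlation clustering -> 1/N experts per block -> regularized
# mean-variance fusion across the (few) experts.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
N, L, n_blocks = 12, 250, 3
base = rng.normal(0.0005, 0.01, size=(L, n_blocks))        # block factors
R = np.repeat(base, N // n_blocks, axis=1) + rng.normal(0, 0.005, (L, N))

# cluster the N return series by correlation distance
dist = 1.0 - np.corrcoef(R.T)
Z = linkage(dist[np.triu_indices(N, 1)], method="average")
labels = fcluster(Z, t=n_blocks, criterion="maxclust")

# each cluster becomes an expert: the 1/N average of its members
experts = np.column_stack([R[:, labels == k].mean(axis=1)
                           for k in np.unique(labels)])

# regularized mean-variance weights across the experts
C = np.cov(experts.T) + 1e-4 * np.eye(experts.shape[1])    # ridge term
w = np.linalg.solve(C, experts.mean(axis=0))
w /= np.abs(w).sum()
print("expert weights:", np.round(w, 3))
```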


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

First-order tree-type dependence between variables and classification performance

Sarunas Raudys; Ausra Saudargiene

Structuring the covariance matrix reduces the number of parameters to be estimated from the training data and, asymptotically, does not increase the generalization error as both the dimensionality and the training sample size grow. A method to benefit from approximately correct assumptions about a first-order tree dependence between components of the feature vector is proposed. We use a structured estimate of the covariance matrix to decorrelate and scale the data, and then train a single-layer perceptron in the transformed feature space. We show that training the perceptron can reduce the negative effects of inexact a priori information. Experiments performed with 13 artificial and 10 real-world data sets show that the first-order tree-type dependence model is the most preferable of the two dozen covariance matrix structures investigated.
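One standard way to obtain a first-order tree dependence is a Chow-Liu style fit: take the maximum spanning tree over pairwise correlations and assemble the tree-structured precision matrix from 2x2 pairwise blocks. The sketch below does that and then uses the result to decorrelate and scale the data; the paper's own estimation procedure may differ, and the AR(1)-style ground truth is a demo assumption.

```python
# Demo: tree-structured (Chow-Liu style) covariance estimate, then whitening.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(7)
p, n = 8, 40                        # few samples relative to p on purpose
rho = 0.6                           # AR(1) chain is itself a first-order tree
true_cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X.T)                     # raw sample covariance (noisy at small n)
corr = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))

# maximum spanning tree over |correlation| (negate for a minimum-tree solver)
mst = minimum_spanning_tree(-np.abs(np.triu(corr, 1)))
edges = np.transpose(mst.nonzero())

# assemble the tree-structured precision matrix from pairwise 2x2 blocks
K = np.zeros((p, p))
deg = np.zeros(p)
for i, j in edges:
    K[np.ix_([i, j], [i, j])] += np.linalg.inv(S[np.ix_([i, j], [i, j])])
    deg[[i, j]] += 1
K -= np.diag((deg - 1) / np.diag(S))   # remove double-counted marginals

# K approximates the inverse covariance; use it to decorrelate and scale
T = np.linalg.cholesky(K)              # whitening transform: x -> T.T @ x
X_white = X @ T
print("whitened covariance (approx. identity):\n",
      np.round(np.cov(X_white.T), 2))
```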


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Pairwise Costs in Multiclass Perceptrons

Sarunas Raudys; Aistis Raudys


Collaboration


Dive into Sarunas Raudys's collaboration.

Top Co-Authors

Alvydas Pumputis

Mykolas Romeris University
