
Publication


Featured research published by Sandro Ridella.


ACM Transactions on Mathematical Software | 1987

Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm

Angelo Corana; M. Marchesi; Claudio Martini; Sandro Ridella

A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization. The algorithm is essentially an iterative random search procedure with adaptive moves along the coordinate directions. It permits uphill moves under the control of a probabilistic criterion, thus tending to avoid the first local minima encountered. The algorithm has been tested against the Nelder and Mead simplex method and against a version of Adaptive Random Search. The test functions were Rosenbrock valleys and multiminima functions in 2, 4, and 10 dimensions. The new method proved to be more reliable than the others, always finding the optimum, or at least a point very close to it. It is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.
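
As a rough illustration of the procedure described above (not the paper's code), here is a minimal Python sketch of simulated annealing over continuous variables: one trial move per coordinate direction, Metropolis acceptance of uphill moves, and a crude step-size adaptation. The cooling schedule and adaptation constants are illustrative assumptions, not the published parameter values.

```python
# Minimal simulated-annealing sketch in the spirit of Corana et al.:
# coordinate-wise random moves, Metropolis acceptance, adaptive steps.
import math
import random

def anneal(f, x0, t0=10.0, rt=0.85, n_cycles=20, n_temps=100, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best_x, best_fx = list(x), fx
    step = [1.0] * len(x)            # per-coordinate step sizes
    t = t0                           # temperature
    for _ in range(n_temps):
        for _ in range(n_cycles):
            accepted = [0] * len(x)
            for i in range(len(x)):  # one trial move per coordinate
                cand = list(x)
                cand[i] += rng.uniform(-step[i], step[i])
                fc = f(cand)
                # Metropolis criterion: always accept downhill moves,
                # accept uphill moves with probability exp(-df/t).
                if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
                    x, fx = cand, fc
                    accepted[i] += 1
                    if fx < best_fx:
                        best_x, best_fx = list(x), fx
            # Crude step adaptation (an assumption, not the paper's rule):
            # widen steps on accepted coordinates, shrink rejected ones.
            for i in range(len(x)):
                step[i] *= 1.5 if accepted[i] else 0.7
        t *= rt                      # geometric cooling schedule
    return best_x, best_fx

# Example: the 2-D Rosenbrock valley used in the paper's test suite.
rosenbrock = lambda v: 100 * (v[1] - v[0] ** 2) ** 2 + (1 - v[0]) ** 2
print(anneal(rosenbrock, [-1.2, 1.0]))
```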


IEEE Transactions on Neural Networks | 2003

A digital architecture for support vector machines: theory, algorithm, and FPGA implementation

Davide Anguita; Andrea Boni; Sandro Ridella

In this paper, we propose a digital architecture for support vector machine (SVM) learning and discuss its implementation on a field programmable gate array (FPGA). We briefly analyze the quantization effects on the performance of the SVM in classification problems to show its robustness, in the feed-forward phase, with respect to fixed-point math implementations; we then address the problem of SVM learning. The architecture described here makes use of a new algorithm for SVM learning that is less sensitive to quantization errors than the solutions that have appeared so far in the literature. The algorithm is composed of two parts: the first exploits a recurrent network for finding the parameters of the SVM; the second uses a bisection process for computing the threshold. The architecture implementing the algorithm is described in detail and mapped onto a real current-generation FPGA (Xilinx Virtex II). Its effectiveness is then tested on a channel equalization problem, where real-time performance is of paramount importance.
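
As a hedged illustration of the quantization analysis mentioned in the abstract (not the paper's architecture), the sketch below quantizes the coefficients and kernel values of an SVM feed-forward phase to a fixed-point grid and counts how many test decisions flip. The RBF kernel, the random toy data, and the bit width are assumptions.

```python
# Quantize the SVM feed-forward computation and compare decisions
# with the floating-point machine.
import math
import random

def quantize(v, frac_bits):
    """Round to a fixed-point grid with the given fractional bits."""
    scale = 1 << frac_bits
    return round(v * scale) / scale

def svm_decision(x, sv, alpha_y, b, gamma, frac_bits=None):
    """Feed-forward phase: f(x) = sum_i alpha_i*y_i*K(sv_i, x) + b,
    optionally with every term quantized to fixed point."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits else (lambda v: v)
    acc = q(b)
    for s, ay in zip(sv, alpha_y):
        k = math.exp(-gamma * sum((si - xi) ** 2 for si, xi in zip(s, x)))
        acc += q(ay) * q(k)
    return 1 if acc >= 0 else -1

# Toy check: fraction of test points whose label flips under quantization.
rng = random.Random(0)
sv = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(20)]
alpha_y = [rng.choice([-1, 1]) * rng.uniform(0, 1) for _ in range(20)]
test = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
flips = sum(
    svm_decision(x, sv, alpha_y, 0.1, 2.0) !=
    svm_decision(x, sv, alpha_y, 0.1, 2.0, frac_bits=6)
    for x in test)
print(f"label flips with 6 fractional bits: {flips}/200")
```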


IEEE Transactions on Neural Networks | 1997

Circular backpropagation networks for classification

Sandro Ridella; Stefano Rovetta; Rodolfo Zunino

The class of mapping networks is a general family of tools for performing a wide variety of tasks. This paper presents a standardized, uniform representation for this class of networks, and introduces a simple modification of the multilayer perceptron with interesting practical properties, especially well suited to pattern classification tasks. The proposed model unifies the two main representation paradigms found in the class of mapping networks for classification, namely the surface-based and the prototype-based schemes, while retaining the advantage of being trainable by backpropagation. The enhancement in representation properties and in generalization performance is assessed through results on the worst-case requirement in terms of hidden units and on the Vapnik-Chervonenkis dimension and cover capacity. The theoretical properties of the network also suggest that the proposed modification to the multilayer perceptron is in many senses optimal. A number of experimental verifications confirm the theoretical results on the model's increased performance, as compared with the multilayer perceptron and the Gaussian radial basis function network.
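
The circular-backpropagation modification is usually summarized as augmenting each input pattern with one extra component equal to its squared norm, so that a single unit's decision surface can be either a hyperplane or a hypersphere, which is how the model unifies surface-based and prototype-based classifiers. A minimal sketch, assuming exactly that augmentation:

```python
# Augment inputs with ||x||^2 before feeding an otherwise standard MLP.
from typing import List

def augment(x: List[float]) -> List[float]:
    # The extra coordinate turns w.x' + b = 0 into
    # w_extra*||x||^2 + w.x + b = 0: a sphere when w_extra != 0,
    # a plane when w_extra == 0.
    return x + [sum(v * v for v in x)]

print(augment([0.5, -1.0]))   # -> [0.5, -1.0, 1.25]
```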


IEEE Transactions on Neural Networks | 1992

Statistically controlled activation weight initialization (SCAWI)

Gian Paolo Drago; Sandro Ridella

An optimum weight initialization that strongly improves the performance of the backpropagation (BP) algorithm is suggested. By statistical analysis, the scale factor R (which is proportional to the maximum magnitude of the weights) is obtained as a function of the paralyzed neuron percentage (PNP). Convergence speed has also been related to PNP by computer simulation. An optimum range for R is shown to exist that minimizes the time needed to reach the minimum of the cost function. Normalization factors are properly defined, which leads to a distribution of the activations that is independent of the particular neuron, and to a single nondimensional quantity, R, whose value can be quickly found by computer simulation.
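
A minimal sketch of the flavor of initialization described above: draw each weight uniformly in [-r, r], with r scaled by the fan-in and the mean square value of the unit's inputs so that activations start in the non-saturated region of the sigmoid. The constant 1.3 and the exact formula below are assumptions for illustration only; the paper derives the optimum scale from the paralyzed-neuron percentage.

```python
# Statistically controlled uniform weight initialization (assumed form).
import math
import random

def scawi_layer(fan_in, fan_out, mean_sq_input=1.0, seed=0):
    rng = random.Random(seed)
    # Assumed scale: shrinks with fan-in so pre-activations stay small.
    r = 1.3 / math.sqrt(1.0 + fan_in * mean_sq_input)
    return [[rng.uniform(-r, r) for _ in range(fan_in)]
            for _ in range(fan_out)]

w = scawi_layer(fan_in=64, fan_out=10)
print(len(w), len(w[0]))   # 10 rows of 64 weights
```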


IEEE Transactions on Neural Networks | 2012

In-Sample and Out-of-Sample Model Selection and Error Estimation for Support Vector Machines

Davide Anguita; Alessandro Ghio; Luca Oneto; Sandro Ridella

In-sample approaches to model selection and error estimation of support vector machines (SVMs) are not as widespread as out-of-sample methods, where part of the data is removed from the training set for validation and testing purposes, mainly because their practical application is not straightforward and the latter provide, in many cases, satisfactory results. In this paper, we survey some recent and not-so-recent results of the data-dependent structural risk minimization framework and propose a proper reformulation of the SVM learning algorithm, so that the in-sample approach can be effectively applied. The experiments, performed on both simulated and real-world datasets, show that our in-sample approach compares favorably to out-of-sample methods, especially in cases where the latter provide questionable results. In particular, when the number of samples is small compared to their dimensionality, as in the classification of microarray data, our proposal can outperform conventional out-of-sample approaches such as cross validation, leave-one-out, or bootstrap methods.
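
For reference, a minimal sketch of the out-of-sample baseline the paper compares against: choosing SVM hyperparameters by k-fold cross-validation. The use of scikit-learn and the toy data are assumptions; the paper does not prescribe a library, and its point is precisely that this estimate degrades when samples are few and features many.

```python
# Out-of-sample model selection: 5-fold cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Few samples, many features: the regime where the paper argues
# out-of-sample estimates become questionable.
X, y = make_classification(n_samples=60, n_features=200, random_state=0)

search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```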


IEEE Transactions on Neural Networks | 2006

Feed-Forward Support Vector Machine Without Multipliers

Davide Anguita; Stefano Pischiutta; Sandro Ridella; Dario Sterpi

In this letter, we propose a coordinate rotation digital computer (CORDIC)-like algorithm for computing the feed-forward phase of a support vector machine (SVM) in fixed-point arithmetic, using only shift and add operations and avoiding resource-consuming multiplications. This result is obtained thanks to a hardware-friendly kernel, which greatly simplifies the SVM feed-forward phase computation and, at the same time, maintains good classification performance with respect to the conventional Gaussian kernel.
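
A hedged sketch of how such a kernel can be evaluated with shifts and adds, assuming a kernel of the form K(x, y) = 2^(-gamma * ||x - y||_1) as reported in the authors' related work on hardware-friendly kernels (an assumption; the letter itself is the authoritative source). Inputs are integers, i.e. already in fixed point, and gamma is a power of two so the exponent needs no multiplier.

```python
# Multiplier-free kernel evaluation: adds, subtractions, and shifts only.
def hw_kernel_shift(x, y, log2_gamma=-2, frac_bits=12):
    # ||x - y||_1 with adds/subtractions only.
    dist = sum(abs(a - b) for a, b in zip(x, y))
    # gamma * dist as a shift, since gamma = 2**log2_gamma.
    e = dist << log2_gamma if log2_gamma >= 0 else dist >> -log2_gamma
    # 2**(-e) in fixed point is a right shift of the constant 1.0.
    # (A CORDIC-like loop would refine the fractional part of the
    # exponent; this sketch keeps the integer part only.)
    one = 1 << frac_bits
    return one >> e if e < frac_bits else 0

# Q12 fixed point: prints 2048 (= 0.5), vs the exact 2**-1.25 ~ 0.42,
# because the sketch truncates the exponent to its integer part.
print(hw_kernel_shift([3, 7], [1, 4]))
```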


Neurocomputing | 2003

Hyperparameter design criteria for support vector classifiers

Davide Anguita; Sandro Ridella; Fabio Rivieccio; Rodolfo Zunino

The design of a support vector machine (SVM) consists of tuning a set of hyperparameter quantities, and requires an accurate prediction of the classifier's generalization performance. The paper describes the application of the maximal-discrepancy criterion to the hyperparameter-setting process, and points out the advantages of such an approach over existing theoretical frameworks. The resulting theoretical predictions are then compared with the k-fold cross-validation empirical method, which is probably the current best-performing approach to the SVM design problem. Experimental results on a wide range of real-world testbeds show that the features of the maximal-discrepancy method can notably narrow the gap that has so far separated theoretical and empirical estimates of a classifier's generalization error.
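
A minimal sketch of the maximal-discrepancy penalty as it is usually computed: flip the labels of half the training set, retrain, and read the discrepancy off the error on the flipped set (with 0/1 loss, minimizing error on the flipped set maximizes the error gap between the two halves, so MD = 1 - 2*err_flipped); hyperparameters are then chosen to minimize training error plus the penalty. scikit-learn and the toy data are assumptions, and the confidence terms of the full criterion are omitted.

```python
# Maximal-discrepancy penalty for SVM hyperparameter selection (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def md_penalty(make_clf, X, y, rng):
    idx = rng.permutation(len(y))
    y_flip = y.copy()
    y_flip[idx[: len(y) // 2]] *= -1           # flip half of the labels
    err_flip = 1.0 - make_clf().fit(X, y_flip).score(X, y_flip)
    return 1.0 - 2.0 * err_flip                # ~ max_f [E1(f) - E2(f)]

X, y = make_classification(n_samples=100, n_features=20, random_state=0)
y = 2 * y - 1                                   # labels in {-1, +1}
rng = np.random.default_rng(0)
for C in [0.01, 0.1, 1.0, 10.0]:
    make_clf = lambda: SVC(kernel="linear", C=C)
    train_err = 1.0 - make_clf().fit(X, y).score(X, y)
    # Pick the C minimizing this penalized estimate.
    print(C, round(train_err + md_penalty(make_clf, X, y, rng), 3))
```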


Neurocomputing | 2008

A support vector machine with integer parameters

Davide Anguita; Alessandro Ghio; Stefano Pischiutta; Sandro Ridella

We describe here a method for building a support vector machine (SVM) with integer parameters. Our method is based on a branch-and-bound procedure, derived from modern mixed-integer quadratic programming solvers, and is useful for implementing the feed-forward phase of the SVM in fixed-point arithmetic. This allows the implementation of the SVM algorithm on resource-limited hardware such as the computing devices used for building sensor networks, where floating-point units are rarely available. Experimental results on well-known benchmark datasets and on a real-world people-detection application show the effectiveness of our approach.
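
As a brute-force stand-in for the branch-and-bound procedure (an illustration, not the paper's solver), the sketch below exhaustively searches integer dual variables maximizing the SVM dual objective under the usual equality constraint. The tiny dataset, linear kernel, and exhaustive enumeration are assumptions; the paper prunes this search with bounds from mixed-integer QP techniques.

```python
# Exhaustive search for integer SVM dual variables on a toy problem.
from itertools import product

X = [(0.0, 1.0), (1.0, 1.5), (2.0, -1.0), (3.0, -0.5)]
y = [1, 1, -1, -1]
C = 3                                    # integer box constraint
K = [[sum(a * b for a, b in zip(u, v)) for v in X] for u in X]

def dual(alpha):
    # SVM dual: sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij
    s = sum(alpha)
    q = sum(alpha[i] * alpha[j] * y[i] * y[j] * K[i][j]
            for i in range(4) for j in range(4))
    return s - 0.5 * q

best = max(
    (a for a in product(range(C + 1), repeat=4)
     if sum(ai * yi for ai, yi in zip(a, y)) == 0),  # sum alpha_i y_i = 0
    key=dual)
print("integer alphas:", best, "dual value:", dual(best))
```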


IEEE Transactions on Instrumentation and Measurement | 1976

Launcher and microstrip characterization

Bruno Bianco; Mauro Parodi; Sandro Ridella; Franco Selvaggi

A method for identifying the scattering parameters of launchers and uniform microstrips is presented. It is shown that eight complex measurements (magnitude and phase) on two microstrips that differ only in length, inserted between two launchers, can give, with suitable algebraic treatment, the S-parameters of both the microstrips and the launchers. This technique is promising for deembedding active devices as well as microstrip discontinuities.
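
A minimal sketch of the core algebra behind such two-line deembedding: with two fixtures differing only in line length, M1 = TL @ T(l1) @ TR and M2 = TL @ T(l2) @ TR in wave-cascading form, so M1 @ inv(M2) is similar to T(l1 - l2) and its eigenvalues are exp(±gamma*(l1 - l2)), yielding the microstrip propagation constant without knowing the launchers. The values below are synthetic; recovering the full launcher S-parameters requires the remaining measurements and algebra described in the paper.

```python
# Propagation constant from two through measurements of different lengths.
import numpy as np

gamma = 0.05 + 2.0j             # synthetic propagation constant (1/mm)
l1, l2 = 10.0, 25.0             # line lengths (mm)

def t_line(l):                  # cascading matrix of an ideal uniform line
    return np.diag([np.exp(-gamma * l), np.exp(gamma * l)])

rng = np.random.default_rng(0)  # arbitrary (unknown) launcher matrices
TL = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
TR = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
M1 = TL @ t_line(l1) @ TR
M2 = TL @ t_line(l2) @ TR

# Eigenvalues of M1 @ inv(M2) are exp(-gamma*dl) and exp(+gamma*dl).
eig = np.linalg.eigvals(M1 @ np.linalg.inv(M2))
gamma_est = np.log(eig) / (l1 - l2)
print(gamma_est)   # recovers gamma up to sign and 2*pi phase wrapping
```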


IEEE Transactions on Neural Networks | 2001

K-winner machines for pattern classification

Sandro Ridella; Stefano Rovetta; Rodolfo Zunino

The paper describes the K-winner machine (KWM) model for classification. KWM training uses unsupervised vector quantization and subsequent calibration to label data-space partitions. A K-winner classifier seeks the largest set of best-matching prototypes agreeing on a test pattern, and provides a local-level measure of confidence. A theoretical analysis characterizes the growth function of a K-winner classifier, and the result leads to tight bounds on generalization performance. The method proves suitable for high-dimensional multiclass problems with large amounts of data. Experimental results on both a synthetic and a real domain (NIST handwritten numerals) confirm the effectiveness of the approach and the consistency of the theoretical framework.
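
A minimal sketch of the K-winner decision rule on top of calibrated prototypes, following the description above: sort prototypes by distance to the test pattern and return the label of the largest run of agreeing nearest prototypes, with the run length K as the confidence measure. The hand-made prototypes are an assumption; the paper obtains them by unsupervised vector quantization plus calibration.

```python
# K-winner classification over labeled prototypes.
def kwm_classify(x, prototypes, labels):
    order = sorted(range(len(prototypes)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(prototypes[i], x)))
    label = labels[order[0]]
    k = 0
    for i in order:                 # largest K with unanimous agreement
        if labels[i] != label:
            break
        k += 1
    return label, k                 # k acts as a local confidence level

protos = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.2, 0.9)]
labels = ["a", "a", "b", "b"]
print(kwm_classify((0.1, 0.0), protos, labels))   # ('a', 2)
```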

Collaboration


Dive into Sandro Ridella's collaborations.

Top Co-Authors
Aldo Casaleggio

National Research Council
