Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Marcello Sanguineti is active.

Publication


Featured research published by Marcello Sanguineti.


Journal of Optimization Theory and Applications | 2002

Approximating networks and extended Ritz method for the solution of functional optimization problems

R. Zoppoli; Marcello Sanguineti; Thomas Parisini

Functional optimization problems can be solved analytically only under special assumptions; otherwise, approximations are needed. The approximate method that we propose is based on two steps. First, the decision functions are constrained to take on the structure of linear combinations of basis functions containing free parameters to be optimized (hence, this step can be considered an extension of the Ritz method, for which fixed basis functions are used). Then, the functional optimization problem can be approximated by nonlinear programming problems. Linear combinations of basis functions are called approximating networks when they benefit from suitable density properties. We term such networks nonlinear (linear) approximating networks if their basis functions contain (do not contain) free parameters. For certain classes of d-variable functions to be approximated, nonlinear approximating networks may require a number of parameters that increases only moderately with d, whereas linear approximating networks may be ruled out by the curse of dimensionality. Since the cost functions of the resulting nonlinear programming problems include complex averaging operations, we minimize such functions by stochastic approximation algorithms. As important special cases, we consider stochastic optimal control and estimation problems. Numerical examples show the effectiveness of the method in solving optimization problems stated in high-dimensional settings, involving for instance several tens of state variables.
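
The two-step scheme lends itself to a minimal numerical sketch. The sketch below is illustrative rather than the paper's implementation: the toy cost, network size, and step sizes are hypothetical. A decision function is constrained to a one-hidden-layer tanh network (a nonlinear approximating network), and the resulting nonlinear programming problem is tackled by stochastic gradient descent, a basic stochastic approximation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy functional optimization problem: find a decision
# function u minimizing  J(u) = E_x[(u(x) - sin(3x))^2 + 0.1 u(x)^2],
# x ~ Uniform(-1, 1).  Pointwise minimization gives the optimal solution
# u*(x) = sin(3x)/1.1, which we use to check the result.

n_hidden = 20                        # "degree" of the approximating network
W = rng.normal(size=(n_hidden, 1))   # free inner parameters
b = rng.normal(size=n_hidden)
c = 0.1 * rng.normal(size=n_hidden)  # outer (linear) coefficients

def net(x, W, b, c):
    """One-hidden-layer tanh network: a nonlinear approximating network."""
    h = np.tanh(W @ x[None, :] + b[:, None])    # (n_hidden, batch)
    return c @ h, h

lr = 0.05
for step in range(20000):
    x = rng.uniform(-1.0, 1.0, size=64)         # Monte Carlo sample of E_x
    u, h = net(x, W, b, c)
    g = 2.0 * (u - np.sin(3 * x)) + 0.2 * u     # d(integrand)/du
    dc = h @ g / x.size                         # backpropagated gradients
    dpre = (c[:, None] * g[None, :]) * (1.0 - h ** 2)
    dW = (dpre @ x)[:, None] / x.size
    db = dpre.sum(axis=1) / x.size
    c -= lr * dc; W -= lr * dW; b -= lr * db    # stochastic approximation step

xs = np.linspace(-1, 1, 201)
u_hat, _ = net(xs, W, b, c)
print("sup-norm distance to u*:", np.max(np.abs(u_hat - np.sin(3 * xs) / 1.1)))
```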


IEEE Transactions on Information Theory | 2002

Comparison of worst case errors in linear and neural network approximation

Věra Kůrková; Marcello Sanguineti

Sets of multivariable functions are described for which worst case errors in linear approximation are larger than those in approximation by neural networks. A theoretical framework for such a description is developed in the context of nonlinear approximation by fixed versus variable basis functions. Comparisons of approximation rates are formulated in terms of certain norms tailored to sets of basis functions. The results are applied to perceptron networks.
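
A toy experiment conveys the flavor of the fixed- versus variable-basis distinction, although the paper's results concern worst-case errors over whole sets of functions rather than a single target. In the sketch below (all choices hypothetical), a sharply localized function is fitted once with a fixed cosine basis and once with the same number of Gaussian units whose centers and widths are free parameters, selected greedily:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
f = np.exp(-200 * (x - 0.3) ** 2)      # sharply localized target
n = 8                                  # basis functions in both approximants

# Linear (fixed-basis) approximation: least squares on the first n cosines.
F = np.column_stack([np.cos(np.pi * k * x) for k in range(n)])
coef, *_ = np.linalg.lstsq(F, f, rcond=None)
err_lin = np.sqrt(np.mean((F @ coef - f) ** 2))

# Variable-basis approximation: n Gaussians with free centers and widths,
# chosen greedily (orthogonal-matching-pursuit style) from a parameter grid.
cands = np.array([np.exp(-((x - t) / w) ** 2)
                  for t in np.linspace(0, 1, 101)
                  for w in (0.02, 0.05, 0.1, 0.2, 0.4)])
chosen, resid = [], f.copy()
for _ in range(n):
    scores = np.abs(cands @ resid) / np.linalg.norm(cands, axis=1)
    chosen.append(int(np.argmax(scores)))      # unit best matching the residual
    G = cands[chosen].T
    c, *_ = np.linalg.lstsq(G, f, rcond=None)  # refit outer coefficients
    resid = f - G @ c
err_var = np.sqrt(np.mean(resid ** 2))

print(f"fixed basis, n={n}:    RMS error {err_lin:.4f}")
print(f"variable basis, n={n}: RMS error {err_var:.4f}")
```

The adjustable centers and widths let the variable basis concentrate its units where the target varies, which is the mechanism behind the gap described in the abstract.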


SIAM Journal on Optimization | 2005

Error Estimates for Approximate Optimization by the Extended Ritz Method

Věra Kůrková; Marcello Sanguineti

An alternative to the classical Ritz method for approximate optimization is investigated. In the extended Ritz method, sets of admissible solutions are approximated by their intersections with sets of linear combinations of all n-tuples of functions from a given basis. This alternative scheme, called variable-basis approximation, includes functions computable by trigonometric polynomials with free frequencies, free-node splines, neural networks, and other nonlinear approximating families. Estimates of rates of approximate optimization by the extended Ritz method are derived. Upper bounds on rates of convergence of suboptimal solutions to the optimal one are expressed in terms of the degree n of variable-basis functions, the modulus of continuity of the functional to be minimized, the modulus of Tikhonov well-posedness of the problem, and certain norms tailored to the type of basis. The results are applied to convex best approximation and to kernel methods in machine learning.
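
The overall shape of such estimates can be sketched as follows; this is a schematic reconstruction with generic symbols, not the paper's exact statement. Suppose Φ attains its minimum over M at f°, let f_n minimize Φ over M_n = M ∩ span_n(G), let ω be a modulus of continuity of Φ, and let α be an increasing modulus of Tikhonov well-posedness, i.e., Φ(f) − Φ(f°) ≥ α(‖f − f°‖). Then

```latex
% Schematic chain of inequalities (generic symbols, assumed setup).
\alpha\bigl(\|f_n - f^{\circ}\|\bigr)
  \;\le\; \Phi(f_n) - \Phi(f^{\circ})
  \;\le\; \omega\bigl(\operatorname{dist}(f^{\circ}, M_n)\bigr),
\qquad\text{hence}\qquad
\|f_n - f^{\circ}\|
  \;\le\; \alpha^{-1}\!\bigl(\omega(\operatorname{dist}(f^{\circ}, M_n))\bigr).
```

The left inequality is the well-posedness assumption; the right one holds because f_n minimizes Φ over M_n, so Φ(f_n) ≤ Φ(g) for the element g of M_n nearest to f°, and Φ(g) − Φ(f°) ≤ ω(‖g − f°‖). Convergence rates in n then follow from upper bounds on dist(f°, M_n), such as the n^{-1/2} rates of variable-basis approximation.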


IEEE Transactions on Information Theory | 2001

Bounds on rates of variable-basis and neural-network approximation

Věra Kůrková; Marcello Sanguineti

The tightness of bounds on rates of approximation by feedforward neural networks is investigated in a more general context of nonlinear approximation by variable-basis functions. Tight bounds on the worst case error in approximation by linear combinations of n elements of an orthonormal variable basis are derived.
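
For context, the classical benchmark behind such rates is the Maurey–Jones–Barron bound, recalled here (a known result, not the paper's contribution): in a Hilbert space, if f belongs to the closure of the convex hull of a bounded set G of basis functions, then

```latex
% Maurey--Jones--Barron bound: s_G = \sup_{g \in G} \|g\| and
% conv_n(G) = convex combinations of n elements of G.
\inf_{f_n \in \operatorname{conv}_n(G)} \|f - f_n\|
  \;\le\; \sqrt{\frac{s_G^2 - \|f\|^2}{n}}.
```

The paper examines how far such O(n^{-1/2}) rates can be improved; the tight bounds for orthonormal variable bases show the limits of any improvement in the worst case.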


Journal of Complexity | 2009

Complexity of Gaussian-radial-basis networks approximating smooth functions

Paul C. Kainen; Věra Kůrková; Marcello Sanguineti

Complexity of Gaussian-radial-basis-function networks, with varying widths, is investigated. Upper bounds on rates of decrease of approximation errors with increasing number of hidden units are derived. Bounds are in terms of norms measuring smoothness (Bessel and Sobolev norms) multiplied by explicitly given functions a(r,d) of the number of variables d and degree of smoothness r. Estimates are proven using suitable integral representations in the form of networks with continua of hidden units computing scaled Gaussians and translated Bessel potentials. Consequences for the tractability of approximation by Gaussian radial-basis-function networks are discussed.
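
An illustrative computation in this spirit (not the paper's construction; the target, centers, and width schedule below are hypothetical): least-squares fits by Gaussian radial-basis-function networks with n hidden units, widths shrinking as units are added, showing how the error decreases with n.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 500)
f = np.sin(2 * np.pi * x) * np.exp(-x ** 2)   # hypothetical smooth target

for n in (4, 8, 16, 32):
    centers = np.linspace(-1, 1, n)           # equispaced Gaussian centers
    width = 2.0 / n                           # widths shrink with n
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    c, *_ = np.linalg.lstsq(Phi, f, rcond=None)
    err = np.max(np.abs(Phi @ c - f))
    print(f"n = {n:3d}  sup-norm error = {err:.2e}")
```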


Journal of Complexity | 2005

Learning with generalization capability by kernel methods of bounded complexity

Věra Kůrková; Marcello Sanguineti

Learning from data with generalization capability is studied in the framework of minimization of regularized empirical error functionals over nested families of hypothesis sets with increasing model complexity. For Tikhonov's regularization with kernel stabilizers, minimization is investigated over restricted hypothesis sets containing, for a fixed integer n, only linear combinations of all n-tuples of kernel functions. Upper bounds are derived on the rate of convergence of suboptimal solutions from such sets to the optimal solution achievable without restrictions on model complexity. The bounds are of the form 1/√n multiplied by a term that depends on the size of the sample of empirical data, the vector of output data, the Gram matrix of the kernel with respect to the input data, and the regularization parameter.
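
A sketch of this setup under simplifying assumptions of our own (Gaussian kernel, equispaced subset of kernel functions, hypothetical constants): the unrestricted minimizer of the regularized empirical error uses all m kernel sections K(·, x_i), the restricted one only n of them, and the gap between the two shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)

m = 200
X = np.sort(rng.uniform(-1, 1, m))            # input data
y = np.sin(3 * X) + 0.1 * rng.normal(size=m)  # output data
gamma = 1e-3                                  # regularization parameter

K = np.exp(-((X[:, None] - X[None, :]) / 0.2) ** 2)   # Gram matrix

# Unrestricted minimizer of (1/m)||K c - y||^2 + gamma c^T K c
# (representer theorem: c = (K + gamma m I)^{-1} y).
c_full = np.linalg.solve(K + gamma * m * np.eye(m), y)
f_full = K @ c_full

for n in (5, 10, 20, 40):
    idx = np.linspace(0, m - 1, n).astype(int)   # only n kernel functions
    Kn, Knn = K[:, idx], K[np.ix_(idx, idx)]
    # minimize (1/m)||Kn a - y||^2 + gamma a^T Knn a  over a in R^n
    a = np.linalg.solve(Kn.T @ Kn / m + gamma * Knn, Kn.T @ y / m)
    gap = np.sqrt(np.mean((Kn @ a - f_full) ** 2))
    print(f"n = {n:3d}  RMS gap to the unrestricted solution = {gap:.3e}")
```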


IEEE Transactions on Information Theory | 2008

Geometric Upper Bounds on Rates of Variable-Basis Approximation

Věra Kůrková; Marcello Sanguineti

In this paper, approximation by linear combinations of an increasing number n of computational units with adjustable parameters (such as perceptrons and radial basis functions) is investigated. Geometric upper bounds on rates of convergence of approximation errors are derived. The bounds depend on certain parameters specific for each function to be approximated. The results are illustrated by examples of values of such parameters in the case of approximation by linear combinations of orthonormal functions.
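
A toy computation shows where the function-specific parameters enter (illustrative only; the decay parameter is hypothetical): for a function whose coefficients in an orthonormal basis decay geometrically, the best approximation by linear combinations of n basis elements inherits a geometric rate, since that error equals the tail norm of the coefficient sequence.

```python
import numpy as np

# Coefficients a_k = q^k of a function in an orthonormal basis; the best
# n-term error is the tail  sqrt(sum_{k>n} a_k^2)  and decays like q^n.
q = 0.7                                   # hypothetical decay parameter
a = q ** np.arange(1, 200)
for n in (1, 5, 10, 20):
    err = np.sqrt(np.sum(a[n:] ** 2))
    print(f"n = {n:2d}  best n-term error = {err:.3e}  (q^n = {q ** n:.3e})")
```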


IEEE Transactions on Neural Networks | 2007

Design of Asymptotic Estimators: An Approach Based on Neural Networks and Nonlinear Programming

Angelo Alessandri; Cristiano Cervellera; Marcello Sanguineti

A methodology based on neural networks and nonlinear programming is proposed for designing state estimators for a class of nonlinear continuous-time dynamic systems. The estimator has the structure of a Luenberger observer with a linear gain and a parameterized (in general, nonlinear) function, whose argument is an innovation term representing the difference between the current measurement and its prediction. The estimator design problem consists in finding the values of the gain and of the parameters that guarantee the asymptotic stability of the estimation error. Toward this end, if a neural network is used to implement this function, the parameters (i.e., the neural weights) are chosen, together with the gain, by constraining the derivative of a quadratic Lyapunov function for the estimation error to be negative definite on a given compact set. It is proved that it is sufficient to impose the negative definiteness of such a derivative only on a suitably dense grid of sampling points. The gain is determined by solving a Lyapunov equation. The neural weights are determined via nonlinear programming by minimizing a cost that penalizes the grid-point constraints that are not satisfied. Techniques based on low-discrepancy sequences are applied to deal with a small number of sampling points and, hence, to reduce the computational burden required to optimize the parameters. Numerical results are reported and comparisons with those obtained by the extended Kalman filter are made.
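
The ingredients of the method line up in a compact toy sketch. Everything below (the system, the gain, the network size, the margin, the optimizer) is a hypothetical stand-in of our own choosing, and a crude finite-difference descent replaces a proper nonlinear-programming solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.stats import qmc

rng = np.random.default_rng(3)

# Hypothetical plant and estimator:
#   dx/dt    = A x + phi(x),                       y = C x,
#   dxhat/dt = A xhat + phi(xhat) + L (y - C xhat) + g_w(y - C xhat),
# with phi(x) = (0, 0.5 sin(2 x_1)), a linear gain L, and a one-hidden-layer
# tanh network g_w acting on the scalar innovation y - C xhat.

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [0.5]])                  # makes A - L C Hurwitz
P = solve_continuous_lyapunov((A - L @ C).T, -np.eye(2))  # Lyapunov equation

# Low-discrepancy (Sobol) grid of plant/error states (x, e) in [-1, 1]^4.
pts = qmc.Sobol(d=4, scramble=True, seed=0).random_base2(m=8) * 2.0 - 1.0
xs, es = pts[:, :2], pts[:, 2:]

nh = 8                                        # hidden units of g_w
shapes = [(nh,), (nh,), (2, nh), (2,)]        # W1 (scalar input), b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def penalty(w):
    """Total violation of  Vdot(x, e) <= -0.05 ||e||^2  over the grid,
    for V(e) = e^T P e along the estimation-error dynamics."""
    parts, i = [], 0
    for s, k in zip(shapes, sizes):
        parts.append(w[i:i + k].reshape(s)); i += k
    W1, b1, W2, b2 = parts
    v = es @ C[0]                             # innovations C e
    gw = (W2 @ np.tanh(np.outer(W1, v) + b1[:, None]) + b2[:, None]).T
    dphi = np.zeros_like(es)                  # phi(x) - phi(xhat), xhat = x - e
    dphi[:, 1] = 0.5 * (np.sin(2 * xs[:, 0]) - np.sin(2 * (xs[:, 0] - es[:, 0])))
    e2 = np.sum(es * es, axis=1)
    vdot = -e2 + 2.0 * np.sum(es * ((dphi - gw) @ P.T), axis=1)
    return np.sum(np.maximum(0.0, vdot + 0.05 * e2))

# Nonlinear-programming step, here a crude finite-difference gradient descent.
w = 0.1 * rng.normal(size=sum(sizes))
print("penalty before training:", penalty(w))
h, lr = 1e-4, 0.02
for _ in range(300):
    base = penalty(w)
    if base == 0.0:
        break                                 # all sampled constraints satisfied
    grad = np.array([(penalty(w + h * np.eye(w.size)[j]) - base) / h
                     for j in range(w.size)])
    w -= lr * grad
print("penalty after training: ", penalty(w))
```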


IEEE Transactions on Information Theory | 2012

Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

Paul C. Kainen; Věra Kůrková; Marcello Sanguineti

The role of input dimension d is studied in approximating, in various norms, target sets of d-variable functions using linear combinations of adjustable computational units. Results from the literature, which emphasize the number n of terms in the linear combination, are reformulated, and in some cases improved, with particular attention to the dependence on d. For worst-case error, upper bounds are given in the factorized form ξ(d)κ(n), where κ is nonincreasing (typically κ(n) ~ n^{-1/2}). Target sets of functions are described for which the function ξ is a polynomial. Some important cases are highlighted where ξ decreases to zero as d → ∞. For target functions, extent (e.g., the size of the domains in R^d where they are defined), scale (e.g., maximum norms of target functions), and smoothness (e.g., the order of square-integrable partial derivatives) may depend on d, and the influence of such dimension-dependent parameters on model complexity is considered. Results are applied to approximation and solution of optimization problems by neural networks with perceptron and Gaussian radial computational units.
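
In symbols, the factorized bounds have the following shape (a schematic restatement of the abstract with generic symbols): for a target set F_d of d-variable functions approximated from span_n(G_d),

```latex
\sup_{f \in F_d}\;
\inf_{f_n \in \operatorname{span}_n(G_d)} \|f - f_n\|
  \;\le\; \xi(d)\,\kappa(n),
\qquad \kappa(n) \sim n^{-1/2},
```

and tractability corresponds to ξ growing at most polynomially in d, the most favorable cases being those in which ξ(d) → 0 as d → ∞.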


Neural Computation | 2010

Regularization techniques and suboptimal solutions to optimization problems in learning from data

Giorgio Gnecco; Marcello Sanguineti

Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
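
One concrete instance of such a technique (an illustrative choice of ours, not necessarily one studied in the paper) is ℓ1 regularization of the empirical error over a dictionary of Gaussian computational units, minimized by the iterative soft-thresholding algorithm (ISTA); the regularization term drives many coefficients exactly to zero, yielding a sparse suboptimal solution built from few units.

```python
import numpy as np

rng = np.random.default_rng(4)

m = 100
X = np.sort(rng.uniform(-1, 1, m))             # empirical data
y = np.sin(3 * X) + 0.1 * rng.normal(size=m)

centers = np.linspace(-1, 1, 50)
Phi = np.exp(-((X[:, None] - centers[None, :]) / 0.3) ** 2)  # Gaussian units

lam = 0.05                                     # regularization parameter
step = m / np.linalg.norm(Phi, 2) ** 2         # ISTA step 1/L for the smooth term
c = np.zeros(centers.size)
for _ in range(2000):
    grad = Phi.T @ (Phi @ c - y) / m           # gradient of the empirical error
    c = c - step * grad
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold

print(f"active units: {np.sum(c != 0)} of {centers.size}, "
      f"training RMS error {np.sqrt(np.mean((Phi @ c - y) ** 2)):.3f}")
```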

Collaboration


Dive into Marcello Sanguineti's collaborations.

Top Co-Authors

A. Alessandri

National Research Council


Věra Kůrková

Academy of Sciences of the Czech Republic
