Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Giorgio Gnecco is active.

Publication


Featured research published by Giorgio Gnecco.


Neurocomputing | 2015

An efficient Self-Organizing Active Contour model for image segmentation

Mohammed M. Abdelsamea; Giorgio Gnecco; Mohamed Medhat Gaber

Active Contour Models (ACMs) constitute a powerful energy-based minimization framework for image segmentation, based on the evolution of an active contour. Among ACMs, supervised ACMs are able to exploit the information extracted from supervised examples to guide the contour evolution. However, their applicability is limited by the accuracy of the probability models they use. As a consequence, effectiveness and efficiency remain the main practical challenges for supervised ACMs, especially when handling images containing regions characterized by intensity inhomogeneity. In this paper, to deal with such kinds of images, we propose a new supervised ACM, named the Self-Organizing Active Contour (SOAC) model, which combines a variational level set method (a specific kind of ACM) with the weights of the neurons of two Self-Organizing Maps (SOMs). Its main contribution is the development of a new ACM energy functional, optimized in such a way that the topological structure of the underlying image intensity distribution is preserved, using the two SOMs, in a parallel-processing and local way. The model has a supervised component, since training pixels associated with different regions are assigned to different SOMs. Experimental results show the superior efficiency and effectiveness of SOAC compared with several existing ACMs.
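A minimal sketch of the kind of SOM update such a model relies on: one small one-dimensional SOM is fitted to the intensities of supervised foreground pixels and another to background pixels, and a new pixel is scored by its closest prototype. The function names, sizes, and the 1D intensity prototypes are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): a 1D Self-Organizing Map trained on the
# intensities of supervised pixels, as one of the two SOMs a SOAC-like model
# could use to model region statistics. Names and sizes are illustrative.
import numpy as np

def train_som(intensities, n_units=8, n_epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Fit 1D SOM prototypes to a set of pixel intensities in [0, 1]."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 1.0, n_units)           # prototype intensities
    grid = np.arange(n_units)                           # 1D lattice coordinates
    n_steps = n_epochs * len(intensities)
    step = 0
    for _ in range(n_epochs):
        for x in rng.permutation(intensities):
            lr = lr0 * (1.0 - step / n_steps)            # decaying learning rate
            sigma = sigma0 * (1.0 - step / n_steps) + 1e-3
            bmu = np.argmin(np.abs(weights - x))          # best-matching unit
            h = np.exp(-0.5 * ((grid - bmu) / sigma) ** 2)  # neighborhood function
            weights += lr * h * (x - weights)             # pull prototypes toward x
            step += 1
    return np.sort(weights)

# Separate prototypes for foreground and background training pixels, then score a
# new pixel by its distance to the closest prototype of each SOM.
fg = train_som(np.clip(np.random.normal(0.8, 0.05, 500), 0, 1))
bg = train_som(np.clip(np.random.normal(0.3, 0.10, 500), 0, 1))
pixel = 0.75
closer_to_fg = np.min(np.abs(fg - pixel)) < np.min(np.abs(bg - pixel))
```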


Neural Computation | 2010

Regularization techniques and suboptimal solutions to optimization problems in learning from data

Giorgio Gnecco; Marcello Sanguineti

Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
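As a concrete instance of the framework, Tikhonov regularization of the learning problem over a reproducing kernel Hilbert space and an n-term suboptimal solution can be written as follows (a standard formulation in generic notation, not necessarily the paper's exact one):

```latex
% Tikhonov-regularized learning over an RKHS H_K, and a suboptimal solution
% formed by a linear combination of n computational units \phi(\cdot;\theta_j).
\min_{f \in \mathcal{H}_K} \; \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2
+ \gamma \, \|f\|_{\mathcal{H}_K}^2 ,
\qquad
f_n(x) = \sum_{j=1}^{n} c_j \, \phi(x;\theta_j)
```

Rates of approximate optimization for such f_n are typically of order O(1/sqrt(n)) in this line of work.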


Neural Computation | 2015

Foundations of support constraint machines

Giorgio Gnecco; Marco Gori; Stefano Melacci; Marcello Sanguineti

The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
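For reference, the classical representer theorem that these results extend states that, for pointwise supervised constraints (labeled examples) and a quadratic regularizer over an RKHS, the optimal solution admits a finite kernel expansion (standard result, stated here in generic notation):

```latex
% Classical representer theorem for kernel machines: the regularized solution is
% a finite combination of kernel sections centered at the training points.
f^{\star}(x) \;=\; \sum_{i=1}^{m} \alpha_i \, K(x, x_i)
```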


Neural Computation | 2013

Learning with boundary conditions

Giorgio Gnecco; Marco Gori; Marcello Sanguineti

Kernel machines traditionally arise from an elegant formulation based on measuring the smoothness of the admissible solutions by the norm in the reproducing kernel Hilbert space (RKHS) generated by the chosen kernel. It was pointed out that they can be formulated in a related functional framework, in which the Green’s function of suitable differential operators is thought of as a kernel. In this letter, our own picture of this intriguing connection is given by emphasizing some relevant distinctions between these different ways of measuring the smoothness of admissible solutions. In particular, we show that for some kernels, there is no associated differential operator. The crucial relevance of boundary conditions is especially emphasized, which is in fact the truly distinguishing feature of the approach based on differential operators. We provide a general solution to the problem of learning from data and boundary conditions and illustrate the significant role played by boundary conditions with examples. It turns out that the degree of freedom that arises in the traditional formulation of kernel machines is indeed a limitation, which is partly overcome when incorporating the boundary conditions. This likely holds true in many real-world applications in which there is prior knowledge about the expected behavior of classifiers and regressors on the boundary.
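The connection emphasized above can be summarized, in generic notation, by the standard relation between a regularization (differential) operator P and its kernel: the kernel is the Green's function of L = P*P, and it is precisely in this relation that the boundary conditions enter (a textbook-style identity for orientation, not the letter's precise statement):

```latex
% Smoothness measured through a differential operator P; the kernel is the
% Green's function of L = P^*P (boundary conditions are needed to determine K).
\|f\|_P^2 = \int_{\Omega} \bigl| (Pf)(x) \bigr|^2 \, dx,
\qquad
(P^{*}P)\, K(x, x') = \delta(x - x')
```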


Computational Management Science | 2009

The weight-decay technique in learning from data: an optimization point of view

Giorgio Gnecco; Marcello Sanguineti

The technique known as “weight decay” in the literature about learning from data is investigated using tools from regularization theory. Weight-decay regularization is compared with Tikhonov’s regularization of the learning problem and with a mixed regularized learning technique. The accuracies of suboptimal solutions to weight-decay learning are estimated for connectionistic models with a-priori fixed numbers of computational units.
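In generic notation, the weight-decay learning problem for a connectionistic model with an a-priori fixed number n of computational units penalizes the squared norm of the output weights (a standard formulation, used here only to fix ideas):

```latex
% Weight-decay learning with n fixed computational units \phi(\cdot;\theta_j):
% the penalty acts on the parameter vector w rather than on a function norm.
\min_{w,\,\theta} \; \frac{1}{m} \sum_{i=1}^{m}
\Bigl( \sum_{j=1}^{n} w_j \, \phi(x_i;\theta_j) - y_i \Bigr)^{2}
\; + \; \lambda \, \|w\|_2^{2}
```

Tikhonov's regularization instead penalizes a norm of the input-output function itself, which is the comparison the paper carries out.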


IEEE Transactions on Information Theory | 2011

On a Variational Norm Tailored to Variable-Basis Approximation Schemes

Giorgio Gnecco; Marcello Sanguineti

A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated. For sets G_K obtained by varying a vector y of parameters in a fixed-structure computational unit K(·, y) (e.g., the set of Gaussians with free centers and widths), upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given, in terms of the ℓ1-norms of the weighting functions in such representations. Families of functions for which the two norms are equal are described.
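For orientation, the variation norm with respect to a set G of computational units (here G = G_K) is usually defined as the Minkowski functional of the closed, convex, symmetric hull of G:

```latex
% G-variation: Minkowski functional of the closed convex symmetric hull of G.
\|f\|_{G} \;=\; \inf \bigl\{\, c > 0 \;:\; f/c \in \operatorname{cl\,conv}\bigl( G \cup (-G) \bigr) \,\bigr\}
```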


Journal of Optimization Theory and Applications | 2013

Dynamic Programming and Value-Function Approximation in Sequential Decision Problems: Error Analysis and Numerical Results

Mauro Gaggero; Giorgio Gnecco; Marcello Sanguineti

Value-function approximation is investigated for the solution via Dynamic Programming (DP) of continuous-state sequential N-stage decision problems, in which the reward to be maximized has an additive structure over a finite number of stages. Conditions that guarantee smoothness properties of the value function at each stage are derived. These properties are exploited to approximate such functions by means of certain nonlinear approximation schemes, which include splines of suitable order and Gaussian radial-basis networks with variable centers and widths. The accuracies of suboptimal solutions obtained by combining DP with these approximation tools are estimated. The results provide insights into the successful performances reported in the literature on the use of value-function approximators in DP. The theoretical analysis is applied to a problem of optimal consumption, with simulation results illustrating the use of the proposed solution methodology. Numerical comparisons with classical linear approximators are presented.
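Schematically, the N-stage recursion whose value functions are being approximated reads as follows (generic symbols, not the paper's exact notation); in practice each J_t is replaced by an approximation built from splines or Gaussian radial-basis units, and the error estimates quantify how the approximation error propagates backward through the stages:

```latex
% Backward Dynamic Programming recursion for an additive N-stage reward.
J_N(x) = h_N(x), \qquad
J_t(x) = \max_{a \in A_t(x)} \Bigl\{ h_t(x,a) + J_{t+1}\bigl( f_t(x,a) \bigr) \Bigr\},
\quad t = N-1, \dots, 0
```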


Smart Materials and Structures | 2016

Optimal design of auxetic hexachiral metamaterials with local resonators

Andrea Bacigalupo; Marco Lepidi; Giorgio Gnecco; Luigi Gambarotta

A parametric beam lattice model is formulated to analyse the propagation properties of elastic in-plane waves in an auxetic material based on a hexachiral topology of the periodic cell, equipped with inertial local resonators. The Floquet-Bloch boundary conditions are imposed on a reduced-order linear model retaining only the dynamically active degrees of freedom. Since the resonators can be designed to open and shift band gaps, an optimal design, focused on the largest possible gap in the low-frequency range, is achieved by solving a maximization problem in the bounded space of the significant geometrical and mechanical parameters. A locally optimized solution, for the lowest pair of consecutive dispersion curves, is found by employing the globally convergent version of the Method of Moving Asymptotes, combined with Monte Carlo and quasi-Monte Carlo multi-start techniques.
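A minimal sketch of the multi-start optimization pattern described above, with a placeholder band-gap objective; the objective, parameter bounds, and the use of SciPy's general-purpose local optimizer in place of the Method of Moving Asymptotes are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (illustrative assumptions): maximize a band-gap width over a
# bounded parameter space via quasi-Monte Carlo multi-start + a local optimizer.
# band_gap_width is a dummy stand-in for the dispersion-based objective.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def band_gap_width(p):
    """Placeholder objective: width of the gap between two dispersion curves."""
    return float(np.exp(-np.sum((p - 0.4) ** 2)))  # smooth dummy surrogate

bounds = [(0.0, 1.0)] * 4                                   # normalized design parameters
starts = qmc.Sobol(d=4, scramble=True, seed=0).random(16)   # quasi-Monte Carlo starts

best = None
for x0 in starts:
    # Minimize the negative gap width from each start; keep the best local optimum.
    res = minimize(lambda p: -band_gap_width(p), x0, method="L-BFGS-B", bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res

print("best parameters:", best.x, "band-gap width:", -best.fun)
```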


Neural Networks | 2011

Some comparisons of complexity in dictionary-based and linear computational models

Giorgio Gnecco; Věra Kůrková; Marcello Sanguineti

Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and the so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
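A typical upper bound of the kind compared here is the Maurey-Jones-Barron-type estimate for variable-basis approximation (stated schematically in the usual form from this literature, not the paper's exact statement): for a Hilbert space norm, a set G of computational units bounded in norm by s_G, and ||f||_G the G-variation of f,

```latex
% n-term variable-basis approximation: Maurey-Jones-Barron-type upper bound.
\inf_{g \in \operatorname{span}_n G} \|f - g\| \;\le\; \frac{s_G \, \|f\|_{G}}{\sqrt{n}}
```

Lower bounds on worst-case errors of any linear approximator on suitable function classes decay more slowly in n, which is the gap exploited in the comparisons.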


SIAM Journal on Optimization | 2012

Suboptimal Solutions to Team Optimization Problems with Stochastic Information Structure

Giorgio Gnecco; Marcello Sanguineti; Mauro Gaggero

Existence, uniqueness, and approximations of smooth solutions to team optimization problems with stochastic information structure are investigated. Suboptimal strategies made up of linear combinations of basis functions containing adjustable parameters are considered. Estimates of their accuracies are derived by combining properties of the unknown optimal strategies with tools from nonlinear approximation theory. The estimates are obtained for basis functions corresponding to sinusoids with variable frequencies and phases, Gaussians with variable centers and widths, and sigmoidal ridge functions. The theoretical results are applied to a problem of optimal production in a multidivisional firm, for which numerical simulations are presented.
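The suboptimal strategies mentioned above have a one-hidden-layer structure; with Gaussian units (one of the three families listed), the strategy of the i-th decision maker takes the generic form below, where the coefficients, centers, and widths are the adjustable parameters (notation illustrative):

```latex
% Suboptimal strategy of decision maker i: linear combination of n Gaussian units.
\gamma_i(x) \;=\; \sum_{j=1}^{n} c_{ij}\,
\exp\!\Bigl( -\frac{\| x - \tau_{ij} \|^{2}}{2\,\sigma_{ij}^{2}} \Bigr)
```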

Collaboration


Dive into Giorgio Gnecco's collaborations.

Top Co-Authors

Mauro Gaggero (National Research Council)

Alberto Bemporad (IMT Institute for Advanced Studies Lucca)