Publication


Featured research published by Stefano Melacci.


Neural Computation | 2015

Foundations of support constraint machines

Giorgio Gnecco; Marco Gori; Stefano Melacci; Marcello Sanguineti

The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Learning with Box Kernels

Stefano Melacci; Marco Gori

Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.


Pattern Analysis and Applications | 2010

A template-based approach to automatic face enhancement

Stefano Melacci; Lorenzo Sarti; Marco Maggini; Marco Gori

This paper presents Visual ENhancement of USers (VENUS), a system able to automatically enhance male and female frontal facial images by exploiting a database of celebrities as reference patterns for attractiveness. Each face is represented by a set of landmark points that can be manually selected or automatically localized using active shape models. Faces can be compared by remapping the landmarks through Catmull–Rom splines, a class of interpolating splines particularly useful for extracting shape-based representations. Given an input image, its landmarks are compared against the known beauty templates and moved towards the K-nearest ones by 2D image warping. The performance of VENUS was evaluated by 20 volunteers on a set of images collected during the Festival of Creativity held in Florence, Italy, in October 2007. The experiments show that 73.9% of the beautified faces were judged more attractive than the original pictures.
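The Catmull–Rom remapping mentioned in the abstract follows the standard cubic formulation; as a minimal sketch (the function name and NumPy usage are illustrative, not taken from the VENUS implementation), a segment between control points p1 and p2 can be evaluated as:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between p1 and p2 at t in [0, 1].

    The points may be scalars or landmark coordinate vectors; the neighbors
    p0 and p3 control the tangents at the segment endpoints."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
```

The curve passes exactly through p1 (t = 0) and p2 (t = 1), which is what makes this spline family convenient for interpolating landmark points.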


IEEE Transactions on Neural Networks | 2012

Unsupervised Learning by Minimal Entropy Encoding

Stefano Melacci; Marco Gori

Following basic principles of information-theoretic learning, in this paper we propose a novel approach to data clustering, referred to as minimal entropy encoding (MEE), which is based on a set of functions (features) projecting each input onto a minimum-entropy configuration (code). Inspired by traditional parsimony principles, we seek solutions in reproducing kernel Hilbert spaces, and we prove that the encoding functions are expressed in terms of a kernel expansion. In order to avoid trivial solutions, the developed features are kept as different as possible by means of a soft constraint on the empirical estimate of the entropy associated with the encoding functions. This leads to an unconstrained optimization problem that can be efficiently solved by conjugate gradient. We also investigate an optimization strategy based on concave-convex algorithms. The relationships with maximum margin clustering are studied, showing that MEE overcomes some of its critical issues, such as the lack of a multiclass extension and the need to face problems with a large number of constraints. A massive evaluation of the proposed approach on several benchmarks shows improvements over state-of-the-art techniques in terms of both accuracy and computational complexity.
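To make the two entropy terms concrete, here is a hedged sketch of a minimal-entropy clustering objective (a generic illustration of the idea, not the paper's exact MEE functional; `mee_style_loss` and the balance weight `lam` are invented for the example):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a (batch of) discrete distribution(s)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def mee_style_loss(codes, lam=1.0):
    """codes: (n_samples, n_codes) soft assignments, rows summing to 1.

    Minimizing the per-sample entropy pushes each input onto a confident
    (near one-hot) code; maximizing the entropy of the mean code acts as
    the soft constraint that keeps the features different."""
    per_sample = entropy(codes).mean()       # low -> confident codes
    balance = entropy(codes.mean(axis=0))    # high -> codes used evenly
    return per_sample - lam * balance
```

Driving the first term down produces confident codes, while the second term discourages the trivial solution in which every input collapses onto the same code.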


IEEE Transactions on Neural Networks | 2013

Constraint Verification With Kernel Machines

Marco Gori; Stefano Melacci

Based on a recently proposed framework of learning from constraints using kernel-based representations, in this brief, we naturally extend its application to the case of inferences on new constraints. We give examples for polynomials and first-order logic by showing how new constraints can be checked on the basis of given premises and data samples. Interestingly, this gives rise to a perceptual logic scheme in which the inference mechanisms do not rely only on formal schemes, but also on the data probability distribution. It is claimed that when using a properly relaxed computational checking approach, the complementary role of data samples makes it possible to break the complexity barriers of related formal checking mechanisms.


international conference on artificial neural networks | 2009

Semi-supervised Learning with Constraints for Multi-view Object Recognition

Stefano Melacci; Marco Maggini; Marco Gori

In this paper we present a novel approach to multi-view object recognition based on kernel methods with constraints. Unlike many previous approaches, we describe a system that is able to exploit a set of views of an input object to recognize it. Views are acquired by cameras located around the object, and each view is modeled by a specific classifier. The relationships among different views are formulated as constraints that are exploited by a sort of collaborative learning process. The proposed approach applies the constraints on unlabeled data in a semi-supervised framework. The results collected on the COIL benchmark show that constraint-based learning can improve the quality of the recognition system and of each single classifier, both on the original and noisy data, and that it can increase invariance with respect to object orientation.


IEEE Transactions on Neural Networks | 2015

Learning With Mixed Hard/Soft Pointwise Constraints

Giorgio Gnecco; Marco Gori; Stefano Melacci; Marcello Sanguineti

A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.


artificial neural networks in pattern recognition | 2008

A Neural Network Approach to Similarity Learning

Stefano Melacci; Lorenzo Sarti; Marco Maggini; Monica Bianchini

This paper presents a novel neural network model, called similarity neural network (SNN), designed to learn similarity measures for pairs of patterns. The model is guaranteed to compute a non-negative and symmetric measure, and it shows good generalization capabilities even when a very small set of supervised examples is used for training. Preliminary experiments, carried out on some UCI datasets, are presented, showing promising results.
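The two guarantees (symmetry and non-negativity) can be obtained by construction; a hedged sketch with a toy network (the weights, dimensions, and the averaging-over-orderings trick are illustrative assumptions, not the exact SNN architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # hidden layer for a pair of 2-dim inputs
w2 = rng.standard_normal(8)        # output weights

def g(x, y):
    """A tiny one-hidden-layer network scoring the ordered pair (x, y)."""
    h = np.tanh(W1 @ np.concatenate([x, y]))
    return w2 @ h

def similarity(x, y):
    """Symmetric by construction (average over both orderings) and
    non-negative thanks to the logistic output unit."""
    z = 0.5 * (g(x, y) + g(y, x))
    return 1.0 / (1.0 + np.exp(-z))
```

Averaging the scores of (x, y) and (y, x) makes the measure symmetric regardless of the network's weights, and the logistic output keeps it in [0, 1].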


international conference on artificial neural networks | 2013

Variational Foundations of Online Backpropagation

Salvatore Frandina; Marco Gori; Marco Lippi; Marco Maggini; Stefano Melacci

On-line backpropagation has become very popular, and it has been the subject of in-depth theoretical analyses and massive experimentation. Yet, almost three decades after its publication, it is still, surprisingly, the source of tough theoretical questions and of experimental results that are somewhat shrouded in mystery. Although seriously plagued by local minima, the batch-mode version of the algorithm is clearly posed as an optimization problem, while the on-line version, in spite of its effectiveness in many real-world problems, has not yet been given a clean formulation. Using variational arguments, in this paper the on-line formulation is proposed as the minimization of a classic functional that is inspired by the principle of minimal action in analytic mechanics. The proposed approach clashes sharply with common interpretations of on-line learning as an approximation of batch mode, and it suggests that processing data all at once might be just an artificial formulation of learning that is hopeless in difficult real-world problems.


Neural Networks | 2012

Learning from pairwise constraints by Similarity Neural Networks

Marco Maggini; Stefano Melacci; Lorenzo Sarti

In this paper we present Similarity Neural Networks (SNNs), a neural network model able to learn a similarity measure for pairs of patterns, exploiting a binary supervision on their similarity/dissimilarity relationships. Pairwise relationships, also referred to as pairwise constraints, generally contain less information than class labels but, in some contexts, are easier to obtain from human supervisors. The SNN architecture guarantees the basic properties of a similarity measure (symmetry and non-negativity) and it can deal with non-transitivity of the similarity criterion. Unlike the majority of the metric learning algorithms proposed so far, it can model non-linear relationships among data while still providing a natural out-of-sample extension to novel pairs of patterns. The theoretical properties of SNNs and their application to semi-supervised clustering are investigated. In particular, we introduce a novel technique that allows the clustering algorithm to compute the optimal representatives of a data partition by means of backpropagation on the input layer, biased by an L2-norm regularizer. An extensive set of experimental results is provided to compare SNNs with the most popular similarity learning algorithms. Both on benchmarks and real-world data, SNNs and SNN-based clustering show improved performance, confirming the advantage of the proposed neural network approach to similarity measure learning.
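The representative-by-backpropagation idea can be illustrated with a dependency-free sketch: treat the representative as a free input vector and ascend the regularized average similarity to the cluster members (the finite-difference optimizer, the `sim` callable, and all hyperparameters here are illustrative assumptions, not the paper's SNN-based procedure):

```python
import numpy as np

def find_representative(members, sim, mu=0.01, lr=0.1, steps=200):
    """Gradient ascent on a free input point x so that it maximizes the
    average similarity to the cluster members, with an L2 penalty on x.

    Central finite differences keep the sketch free of autodiff machinery."""
    x = members.mean(axis=0).copy()  # warm start at the centroid

    def objective(x):
        return np.mean([sim(x, m) for m in members]) - mu * np.dot(x, x)

    eps = 1e-5
    for _ in range(steps):
        grad = np.array([(objective(x + eps * e) - objective(x - eps * e))
                         / (2 * eps) for e in np.eye(len(x))])
        x += lr * grad
    return x
```

With a negative-squared-distance similarity, for instance, the fixed point is the cluster centroid shrunk by the L2 penalty, i.e. mean(members) / (1 + mu).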

Collaboration


Dive into Stefano Melacci's collaborations.

Top Co-Authors

Marcello Pelillo

Ca' Foscari University of Venice
