Publication


Featured research published by Alessandro Verri.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Support vector machines for 3D object recognition

Massimiliano Pontil; Alessandro Verri

Support vector machines (SVMs) have been recently proposed as a new technique for pattern recognition. Intuitively, given a set of points which belong to either of two classes, a linear SVM finds the hyperplane leaving the largest possible fraction of points of the same class on the same side, while maximizing the distance of either class from the hyperplane. The hyperplane is determined by a subset of the points of the two classes, named support vectors, and has a number of interesting theoretical properties. In this paper, we use linear SVMs for 3D object recognition. We illustrate the potential of SVMs on a database of 7200 images of 100 different objects. The proposed system does not require feature extraction and performs recognition on images regarded as points of a space of high dimension without estimating pose. The excellent recognition rates achieved in all the performed experiments indicate that SVMs are well-suited for aspect-based recognition.
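The maximal-margin idea behind a linear SVM can be sketched in its simplest case: with a single training point per class, the optimal hyperplane is the perpendicular bisector of the segment joining the two points (the toy data below is an assumption, not from the paper):

```python
import numpy as np

# Toy illustration of the maximal-margin hyperplane (hypothetical data):
# with one point per class, the optimal separating hyperplane is the
# perpendicular bisector of the segment joining the two support vectors.
x_pos = np.array([2.0, 1.0])   # class +1
x_neg = np.array([0.0, -1.0])  # class -1

w = x_pos - x_neg                # normal to the hyperplane
b = -w @ (x_pos + x_neg) / 2.0   # hyperplane passes through the midpoint

def decide(x):
    return np.sign(w @ x + b)

print(decide(x_pos), decide(x_neg))  # -> 1.0 -1.0
```

Both points are support vectors here; in general only a subset of the training set ends up determining w and b.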


Biological Cybernetics | 1988

A computational approach to motion perception

Sergio Uras; F. Girosi; Alessandro Verri; V. Torre

In this paper it is shown that the computation of the optical flow from a sequence of time-varying images is not, in general, an underconstrained problem. A local algorithm for the computation of the optical flow which uses second order derivatives of the image brightness pattern, and avoids the aperture problem, is presented. The obtained optical flow is very similar to the true motion field — which is the vector field associated with moving features on the image plane — and can be used to recover 3D motion information. Experimental results on sequences of real images, together with estimates of relevant motion parameters, like time-to-crash for translation and angular velocity for rotation, are presented and discussed. Due to the remarkable accuracy which can be achieved in estimating motion parameters, the proposed method is likely to be very useful in a number of computer vision applications.
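The second-order constraint can be sketched on a synthetic translating pattern: differentiating the brightness constancy equation in x and in y gives, at each pixel, a 2x2 linear system whose matrix is the spatial Hessian of the brightness, solvable wherever that Hessian is invertible (a minimal sketch with assumed synthetic data, not the authors' full algorithm):

```python
import numpy as np

# Second-order optical flow constraint at one pixel:
#   [E_xx E_xy] [u]     [E_xt]
#   [E_xy E_yy] [v] = - [E_yt]
true_uv = np.array([1.5, -0.5])  # ground-truth translation per frame (assumed)

def frame(t, n=32):
    y, x = np.mgrid[0:n, 0:n].astype(float)
    xs, ys = x - true_uv[0] * t, y - true_uv[1] * t
    return 0.5 * xs**2 + xs * ys + 1.5 * ys**2  # quadratic brightness pattern

E0, E1, Em1 = frame(0.0), frame(1.0), frame(-1.0)

def d(f, axis):  # central differences (exact in the interior for quadratics)
    return np.gradient(f, axis=axis)

Ex, Ey = d(E0, 1), d(E0, 0)
Ext = (d(E1, 1) - d(Em1, 1)) / 2.0   # mixed space-time derivatives
Eyt = (d(E1, 0) - d(Em1, 0)) / 2.0
Exx, Exy, Eyy = d(Ex, 1), d(Ex, 0), d(Ey, 0)

i, j = 16, 16  # an interior pixel, away from boundary effects
H = np.array([[Exx[i, j], Exy[i, j]], [Exy[i, j], Eyy[i, j]]])
uv = np.linalg.solve(H, -np.array([Ext[i, j], Eyt[i, j]]))
print(uv)  # recovers the true translation
```

Because the system is 2x2 per pixel, no smoothness assumption over a neighborhood is needed where the Hessian is well conditioned, which is how the aperture problem is avoided at such points.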


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

A novel kernel method for clustering

Francesco Camastra; Alessandro Verri

Kernel methods are algorithms that, by replacing the inner product with an appropriate positive definite function, implicitly perform a nonlinear mapping of the input data into a high-dimensional feature space. In this paper, we present a kernel method for clustering inspired by the classical k-means algorithm in which each cluster is iteratively refined using a one-class support vector machine. Our method, which can be easily implemented, compares favorably with respect to popular clustering algorithms, like k-means, neural gas, and self-organizing maps, on a synthetic data set and three UCI real data benchmarks (IRIS data, Wisconsin breast cancer database, Spam database).
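The kernel trick underlying such methods can be sketched with plain kernel k-means (a simplified relative of the algorithm described above, without the one-class-SVM refinement; the data and the seed-based initialization are assumptions):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_kmeans(X, seeds, iters=20, gamma=1.0):
    # Assign each point to the cluster whose feature-space mean is closest,
    # computing every distance through the kernel only:
    # ||phi(x)-mu_c||^2 = K(x,x) - 2*mean_j K(x,x_j) + mean_ij K(x_i,x_j)
    K = rbf(X, X, gamma)
    labels = rbf(X, seeds, gamma).argmax(1)  # seed-based initialization
    k = len(seeds)
    for _ in range(iters):
        dist = np.full((len(X), k), np.inf)
        for c in range(k):
            idx = labels == c
            if idx.any():
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(1)
                              + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels

# Two well-separated synthetic blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels = kernel_kmeans(X, seeds=X[[0, 20]])
```

The data never appear outside kernel evaluations, so the cluster means live only implicitly in the feature space.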


International Conference on Image Processing | 2003

Histogram intersection kernel for image classification

Annalisa Barla; Francesca Odone; Alessandro Verri

In this paper we address the problem of classifying images, by exploiting global features that describe color and illumination properties, and by using the statistical learning paradigm. The contribution of this paper is twofold. First, we show that histogram intersection has the required mathematical properties to be used as a kernel function for support vector machines (SVMs). Second, we give two examples of how an SVM, equipped with such a kernel, can achieve very promising results on image classification based on color information.
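The histogram intersection kernel itself is one line: K(h1, h2) = sum_i min(h1_i, h2_i), the overlap between two histograms (the histograms below are made up for illustration):

```python
import numpy as np

def histogram_intersection(h1, h2):
    # K(h1, h2) = sum_i min(h1_i, h2_i): the area shared by two histograms.
    return np.minimum(h1, h2).sum()

# Hypothetical 4-bin color histograms, each normalized to sum to 1
a = np.array([0.5, 0.3, 0.1, 0.1])
b = np.array([0.4, 0.4, 0.1, 0.1])
c = np.array([0.0, 0.1, 0.2, 0.7])

print(histogram_intersection(a, b))  # similar histograms -> 0.9
print(histogram_intersection(a, c))  # dissimilar histograms -> 0.3
```

For normalized histograms the value lies in [0, 1], reaching 1 only when the two histograms coincide.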


Neural Computation | 1998

Properties of support vector machines

Massimiliano Pontil; Alessandro Verri

Support vector machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed support vectors (SV). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a problem of quadratic programming that depends on a regularization parameter. In this article, we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending on only the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of feature space of finite dimension m, we also show that there are at most m + 1 margin vectors and observe that m + 1 SVs are usually sufficient to determine the decision surface fully. For relatively small m, this latter result leads to a consistent reduction of the SV number.
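In standard soft-margin SVM notation (a sketch consistent with, but not copied from, the article), the decision function obtained from the quadratic program is

```latex
f(x) = \sum_{i \in \mathrm{SV}} \alpha_i \, y_i \, \langle x_i, x \rangle + b,
\qquad 0 \le \alpha_i \le C ,
```

and, since margin vectors satisfy $0 < \alpha_i < C$ (with $y_i f(x_i) = 1$) while the remaining support vectors sit at the bound $\alpha_i = C$, the normal of the separating hyperplane splits as

```latex
w = \sum_{0 < \alpha_i < C} \alpha_i \, y_i \, x_i
  \; + \; C \sum_{\alpha_i = C} y_i \, x_i ,
```

with the first term depending only on the margin vectors and the second proportional to the regularization parameter.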


Neural Computation | 2004

Are loss functions all the same?

Lorenzo Rosasco; Ernesto De Vito; Andrea Caponnetto; Michele Piana; Alessandro Verri

In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of the expected risk for a convex loss function in the case of classification. The main outcome of our analysis is that for classification, the hinge loss appears to be the loss of choice. Other things being equal, the hinge loss leads to a convergence rate practically indistinguishable from the logistic loss rate and much better than the square loss rate. Furthermore, if the hypothesis space is sufficiently rich, the bounds obtained for the hinge loss are not loosened by the thresholding stage.
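The three losses compared here, written as functions of the margin m = y f(x), are all convex but weight errors very differently (a quick numeric illustration, not the paper's bounds):

```python
import numpy as np

# Classification losses as functions of the margin m = y * f(x).
def hinge(m):    return np.maximum(0.0, 1.0 - m)
def logistic(m): return np.log1p(np.exp(-m))
def square(m):   return (1.0 - m) ** 2

m = np.array([-1.0, 0.0, 1.0, 2.0])
print(hinge(m))   # [2. 1. 0. 0.]
print(square(m))  # [4. 1. 0. 1.]
```

Note that the square loss penalizes confidently correct predictions (m > 1) as much as mistakes of the same magnitude, while hinge and logistic losses essentially vanish there, one intuition behind the slower square-loss rate.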


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1990

Differential techniques for optical flow

Alessandro Verri; F. Girosi; Vincent Torre

We show that optical flow, i.e., the apparent motion of the time-varying brightness over the image plane of an imaging device, can be estimated by means of simple differential techniques. Linear algebraic equations for the two components of optical flow at each image location are derived. The coefficients of these equations are combinations of spatial and temporal derivatives of the image brightness. The equations are suggested by an analogy with the theory of deformable bodies and are exactly true for particular classes of motion or elementary deformations. Locally, a generic optical flow can be approximated by using a constant term and a suitable combination of four elementary deformations of the time-varying image brightness, namely, a uniform expansion, a pure rotation, and two orthogonal components of shear. When two of the four equations that correspond to these deformations are satisfied, optical flow can more conveniently be computed by assuming that the spatial gradient of the image brightness is stationary. In this case, it is also possible to evaluate the difference between optical flow and motion field—that is, the two-dimensional vector field that is associated with the true displacement of points on the image plane. Experiments on sequences of real images are reported in which the obtained optical flows are used successfully for the estimate of three-dimensional motion parameters, the detection of flow discontinuities, and the segmentation of the image in different moving objects.
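The first-order decomposition used here can be sketched directly: the gradient of the velocity field splits into a uniform expansion (divergence), a rotation (curl), and two shear components (the function names and the test field are assumptions):

```python
# Local first-order decomposition of a 2-D velocity field v = (u, v):
# its gradient tensor splits into expansion, rotation, and two shears.
def decompose(grad_v):
    (ux, uy), (vx, vy) = grad_v   # partial derivatives du/dx, du/dy, ...
    return {
        "expansion": ux + vy,     # divergence: uniform expansion
        "rotation":  vx - uy,     # curl: pure rotation
        "shear1":    ux - vy,     # first shear component
        "shear2":    uy + vx,     # second shear component
    }

# Pure rotation field v = (-y, x): gradient tensor [[0, -1], [1, 0]]
print(decompose([[0.0, -1.0], [1.0, 0.0]]))
# expansion and both shears vanish; only the rotation term survives
```

Each elementary deformation contributes one linear equation in the flow components, which is how the method assembles its algebraic system.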


Archive | 2002

Pattern Recognition with Support Vector Machines

Seong Whan Lee; Alessandro Verri

We examine using a Support Vector Machine to predict secretory signal peptides. We predict signal peptides for both prokaryotic and eukaryotic organisms. Signaling versus non-signaling peptides, as well as cleavage sites, were predicted from sequences of amino acids. Two types of kernels (each corresponding to a different metric) were used: the Hamming distance, and a distance based upon the percent accepted mutation (PAM) score, both trained on the same signal peptide data.
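A Hamming-distance-based kernel of this kind can be sketched for fixed-length amino-acid windows, e.g. via K(s, t) = exp(-gamma * d_H(s, t)) (the exponential form, the example strings, and gamma are assumptions for illustration):

```python
import math

def hamming(s, t):
    # Hamming distance: number of positions where two equal-length
    # sequences differ.
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

def hamming_kernel(s, t, gamma=0.5):
    # Turn the distance into a similarity usable as a kernel value.
    return math.exp(-gamma * hamming(s, t))

# Hypothetical 8-residue windows of amino acids
print(hamming("MKKSLVLA", "MKKTLVLA"))                # 1
print(hamming_kernel("MKKSLVLA", "MKKSLVLA"))          # 1.0 (identical)
```

Identical windows get kernel value 1, and similarity decays as the sequences diverge.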


IEEE Transactions on Image Processing | 2005

Building kernels from binary strings for image matching

Francesca Odone; Annalisa Barla; Alessandro Verri

In the statistical learning framework, the use of appropriate kernels may be the key for substantial improvement in solving a given problem. In essence, a kernel is a similarity measure between input points satisfying some mathematical requirements and possibly capturing the domain knowledge. We focus on kernels for images: we represent the image information content with binary strings and discuss various bitwise manipulations obtained using logical operators and convolution with nonbinary stencils. In the theoretical contribution of our work, we show that histogram intersection is a Mercer's kernel and we determine the modifications under which a similarity measure based on the notion of Hausdorff distance is also a Mercer's kernel. In both cases, we determine explicitly the mapping from input to feature space. The presented experimental results support the relevance of our analysis for developing effective trainable systems.
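The Mercer property can at least be checked empirically: on any finite sample, a Mercer kernel must produce a positive semidefinite Gram matrix. A quick numerical check for histogram intersection (random histograms; an illustration, not a proof):

```python
import numpy as np

# Empirical sanity check of the Mercer condition: the Gram matrix of a
# Mercer kernel on any finite sample is positive semidefinite.
rng = np.random.default_rng(42)
H = rng.random((10, 8))
H /= H.sum(axis=1, keepdims=True)  # normalize rows into histograms

# Histogram-intersection Gram matrix: G[i, j] = sum_k min(H[i, k], H[j, k])
G = np.minimum(H[:, None, :], H[None, :, :]).sum(-1)
eigvals = np.linalg.eigvalsh(G)
print(eigvals.min() >= -1e-10)  # no significant negative eigenvalue
```

A single negative eigenvalue on some sample would disprove the Mercer property; the paper's contribution is the proof that none can ever occur.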


Biological Cybernetics | 1993

On the use of size functions for shape analysis

Alessandro Verri; Claudio Uras; Patrizio Frosini; Massimo Ferri

According to a recent mathematical theory a shape can be represented by size functions, which convey information on both the topological and metric properties of the viewed shape. In this paper the relevance of the theory of size functions to visual perception is investigated. An algorithm for the computation of the size functions is presented, and the main theoretical properties are demonstrated on real images. It is shown that the representation of shape in terms of size functions (1) can be tailored to suit the invariance of the problem at hand and (2) is stable against small qualitative and quantitative changes of the viewed shape. A distance between size functions is used as a measure of similarity between the representations of two different shapes. The results obtained indicate that size functions are likely to be very useful for object recognition. In particular, they seem to be well suited for the recognition of natural and articulated objects.
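In one dimension a size function is simple to compute: for a measuring function φ, ℓ(x, y) counts the connected components of the sublevel set {φ ≤ y} that contain at least one point with φ ≤ x (a minimal discrete sketch; the sample signal is made up):

```python
def size_function(phi, x, y):
    # 1-D size function on a sampled measuring function phi:
    # count runs of {phi <= y} that contain at least one value <= x.
    assert x <= y
    count, in_run, run_hits_x = 0, False, False
    for v in phi:
        if v <= y:
            if not in_run:
                in_run, run_hits_x = True, False
            if v <= x:
                run_hits_x = True
        else:
            if in_run and run_hits_x:
                count += 1
            in_run = False
    if in_run and run_hits_x:
        count += 1
    return count

phi = [0.2, 0.9, 0.1, 0.4, 0.8, 0.3]
print(size_function(phi, 0.25, 0.5))  # 2: runs [0.2] and [0.1, 0.4] qualify
print(size_function(phi, 0.5, 0.5))   # 3: every run below 0.5 qualifies
```

Varying x and y sweeps out the full size function, whose discontinuity pattern is what carries the shape information.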

Collaboration


Dive into Alessandro Verri's collaborations.

Top Co-Authors

Lorenzo Rosasco

Massachusetts Institute of Technology

Claudio Uras

Istituto Nazionale di Fisica Nucleare
