Gianluca Donato
Microsoft
Publications
Featured research published by Gianluca Donato.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999
Gianluca Donato; Marian Stewart Bartlett; Joseph C. Hager; Paul Ekman; Terrence J. Sejnowski
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
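The Gabor wavelet representation the abstract highlights can be illustrated with a minimal filter bank: convolve the image with Gaussian-windowed sinusoids at several orientations and scales, and classify on the response magnitudes. The sketch below is a generic illustration, not the authors' pipeline; the filter size and parameter choices are assumptions.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real part of a 2-D Gabor filter: a Gaussian window times a sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def convolve_same(image, kern):
    """FFT-based 2-D convolution cropped back to the input size."""
    H, W = image.shape
    kh, kw = kern.shape
    shape = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(np.fft.rfft2(image, s=shape) * np.fft.rfft2(kern, s=shape), s=shape)
    return full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def gabor_features(image, n_orientations=4, wavelengths=(4, 8)):
    """Stack filter-response magnitudes over orientations and scales."""
    responses = []
    for wl in wavelengths:
        for k in range(n_orientations):
            kern = gabor_kernel(15, sigma=wl / 2,
                                theta=k * np.pi / n_orientations, wavelength=wl)
            responses.append(np.abs(convolve_same(image, kern)))
    return np.stack(responses)
```

A classifier (e.g., nearest neighbor or a linear discriminant) would then operate on these stacked magnitudes; the 96 percent figure refers to the paper's own tuned pipeline, not this sketch.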
International Conference on Robotics and Automation | 2001
Bruno Sinopoli; Mario Micheli; Gianluca Donato; Tak-John Koo
We are developing a system for autonomous navigation of unmanned aerial vehicles (UAVs) based on computer vision. Each UAV is equipped with on-board cameras and is provided with noisy estimates of its own state from GPS/INS. The mission of the UAV is low-altitude navigation from an initial position to a final position in a partially known 3-D environment while avoiding obstacles and minimizing path length. We use a hierarchical approach to path planning, distinguishing between a global offline computation, based on a coarse known model of the environment, and a local online computation, based on information coming from the vision system. A UAV builds and updates a virtual 3-D model of the surrounding environment by processing image sequences and fusing them with sensor data. Based on this model, the UAV plans a path from its current position to the terminal point. It then follows this path, acquiring more data from the on-board cameras and refining the map and local path in real time.
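The global offline stage of such a hierarchical planner is commonly a graph search over a coarse model of the environment. As a hedged illustration (the grid resolution, 4-connectivity, and unit step costs are assumptions, not the authors' formulation), A* over a 2-D occupancy grid looks like:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance, admissible here
    tie = itertools.count()  # tie-breaker so the heap never compares nodes/parents
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # walk parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None  # goal unreachable
```

The local online stage would then re-plan over a finer grid in a window around the vehicle as the vision system updates the 3-D model.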
European Conference on Computer Vision | 2002
Gianluca Donato; Serge J. Belongie
The thin plate spline (TPS) is an effective tool for modeling coordinate transformations that has been applied successfully in several computer vision applications. Unfortunately the solution requires the inversion of a p × p matrix, where p is the number of points in the data set, thus making it impractical for large scale applications. As it turns out, a surprisingly good approximate solution is often possible using only a small subset of corresponding points. We begin by discussing the obvious approach of using the subsampled set to estimate a transformation that is then applied to all the points, and we show the drawbacks of this method. We then proceed to borrow a technique from the machine learning community for function approximation using radial basis functions (RBFs) and adapt it to the task at hand. Using this method, we demonstrate a significant improvement over the naive method. One drawback of this method, however, is that it does not allow for principal warp analysis, a technique for studying shape deformations introduced by Bookstein based on the eigenvectors of the p × p bending energy matrix. To address this, we describe a third approximation method based on a classic matrix completion technique that allows for principal warp analysis as a by-product. By means of experiments on real and synthetic data, we demonstrate the pros and cons of these different approximations so as to allow the reader to make an informed decision suited to his or her application.
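To make the setup concrete, the sketch below implements the exact TPS fit and the "obvious" subsampling approximation the abstract starts from: solve the (m+3) × (m+3) system on m randomly chosen correspondences and warp all p points with it. This is a generic 2-D illustration using Bookstein's kernel U(r) = r² log r²; it is not the paper's RBF or matrix-completion method.

```python
import numpy as np

def tps_U(r):
    """Bookstein's radial kernel U(r) = r^2 log r^2, with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz] ** 2)
    return out

def fit_tps(ctrl, targets):
    """Exact TPS: solve [[K, P], [P^T, 0]] [w; a] = [targets; 0]."""
    n = ctrl.shape[0]
    K = tps_U(np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), ctrl])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, targets.shape[1]))
    b[:n] = targets
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]  # RBF weights w, affine part a

def apply_tps(w, a, ctrl, pts):
    """Warp pts with a TPS defined by control points ctrl."""
    D = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    return tps_U(D) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

def approx_tps(src, dst, m, rng):
    """Naive approximation: fit on m random correspondences, warp all of src."""
    idx = rng.choice(len(src), size=m, replace=False)
    w, a = fit_tps(src[idx], dst[idx])
    return apply_tps(w, a, src[idx], src)
```

On smooth deformations this already tracks the full solution well; the paper's point is that it degrades where the subsample misses local structure, motivating the RBF-based and matrix-completion alternatives it analyzes.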
Conference on Information and Knowledge Management | 2009
Feng Pan; Tim Converse; David Ahn; Franco Salvetti; Gianluca Donato
Modern search engines have to be fast to satisfy users, so there are hard back-end latency requirements. The set of features useful for search ranking functions, though, continues to grow, making feature computation a latency bottleneck. As a result, not all available features can be used for ranking, and in fact, much of the time, only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. To this end, we explore different feature selection methods using boosted regression trees, including both greedy approaches (selecting the features with the highest relative importance as computed by boosted trees, and discounting importance by feature similarity) and a randomized approach. We evaluate and compare these approaches using data from a commercial search engine. The experimental results show that the proposed randomized feature selection with feature-importance-based backward elimination outperforms greedy approaches and achieves a comparable relevance with 30 features to a full-feature model trained with 419 features and the same modeling parameters.
Computer and Information Technology | 2011
Feng Pan; Tim Converse; David Ahn; Franco Salvetti; Gianluca Donato
Modern search engines have to be fast to satisfy users, so there are hard back-end latency requirements. The set of features useful for search ranking functions, though, continues to grow, making feature computation a latency bottleneck. As a result, not all available features can be used for ranking, and in fact, much of the time only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. To this end, we explore different feature selection methods using boosted regression trees, including both greedy approaches (i.e., selecting the features with the highest relative influence as computed by boosted trees, discounting importance by feature similarity) and randomized approaches (i.e., best-only genetic algorithm, a proposed more efficient randomized method with feature-importance-based backward elimination). We evaluate and compare these approaches using two data sets, one from a commercial Wikipedia search engine and the other from a commercial Web search engine. The experimental results show that the greedy approach that selects top features with the highest relative influence performs close to the full-feature model, and the randomized feature selection with feature-importance-based backward elimination outperforms all other randomized and greedy approaches, especially on the Wikipedia data.
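The backward-elimination idea can be sketched with a deliberately tiny stand-in for the boosted ranker: depth-1 gradient-boosted trees whose per-feature importance is the squared-error reduction their splits achieve. Everything below (the stump learner, the three candidate thresholds, the round count) is an assumption for illustration, not the paper's production setup.

```python
import numpy as np

def boost_stumps(X, y, n_rounds=50, lr=0.1):
    """Tiny gradient boosting with depth-1 trees on squared loss.
    Returns per-feature importance: total split gain credited to each feature."""
    n, p = X.shape
    pred = np.full(n, y.mean())
    importance = np.zeros(p)
    for _ in range(n_rounds):
        resid = y - pred
        best = None
        for j in range(p):
            for t in np.percentile(X[:, j], (25, 50, 75)):  # a few candidate thresholds
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                # SSE reduction of the split (up to a constant shared by all
                # splits in the same round, so comparisons are unaffected)
                gain = (resid[left].sum() ** 2 / left.sum()
                        + resid[~left].sum() ** 2 / (~left).sum())
                if best is None or gain > best[0]:
                    best = (gain, j, t, resid[left].mean(), resid[~left].mean())
        gain, j, t, lval, rval = best
        importance[j] += gain
        pred = pred + lr * np.where(X[:, j] <= t, lval, rval)
    return importance

def backward_eliminate(X, y, keep):
    """Refit, drop the least-important feature, repeat until `keep` remain."""
    active = list(range(X.shape[1]))
    while len(active) > keep:
        imp = boost_stumps(X[:, active], y)
        active.pop(int(np.argmin(imp)))
    return active
```

With a real boosted-tree library the same loop would read the model's reported importances (e.g., scikit-learn's `feature_importances_`) instead of this toy gain; the paper's finding is that this elimination order beats one-shot greedy top-k selection, especially on the Wikipedia data.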
Archive | 2011
Daniel Marantz; Keith Alan Regier; Tejas Nadkarni; David Ahn; Gianluca Donato
Archive | 1999
Marian Stewart Bartlett; Gianluca Donato; Javier R. Movellan; Joseph C. Hager; Paul Ekman; Terrence J. Sejnowski
Archive | 2002
Gianluca Donato; Serge J. Belongie
Neural Information Processing Systems | 1999
Marian Stewart Bartlett; Gianluca Donato; Javier R. Movellan; Joseph C. Hager; Paul Ekman; Terrence J. Sejnowski
Archive | 2007
Justin Denney; Jason Wodicka; Gianluca Donato