Nicola Greggio
Instituto Superior Técnico
Publications
Featured research published by Nicola Greggio.
machine vision applications | 2012
Nicola Greggio; Alexandre Bernardino; Cecilia Laschi; Paolo Dario; José Santos-Victor
The expectation maximization algorithm has classically been used to find maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance, mixture models. A key issue in such problems is the choice of the model complexity: the higher the number of components in the mixture, the higher the data likelihood, but also the higher the computational burden and the risk of overfitting. In this work, we propose a clustering method based on the expectation maximization algorithm that adapts the number of components of a finite Gaussian mixture model online from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and splits it incrementally during expectation maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations needed to reach a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique on both synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.
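The incremental split scheme described in the abstract can be sketched in a few lines. The following is a minimal 1-D numpy illustration, not the authors' implementation: it uses a simple BIC-style penalty as a stand-in for the paper's model-selection rule, and splits the broadest component along its standard deviation.

```python
import numpy as np

def gauss(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def em_gmm(x, means, vars_, weights, iters=50):
    """Plain EM for a 1-D Gaussian mixture; returns updated parameters
    and the final log-likelihood."""
    for _ in range(iters):
        dens = np.array([w * gauss(x, m, v)
                         for m, v, w in zip(means, vars_, weights)])  # (K, N)
        resp = dens / dens.sum(axis=0, keepdims=True)  # E-step: responsibilities
        nk = resp.sum(axis=1)                          # M-step: re-estimation
        means = (resp @ x) / nk
        vars_ = np.maximum((resp @ x ** 2) / nk - means ** 2, 1e-6)
        weights = nk / x.size
    dens = np.array([w * gauss(x, m, v) for m, v, w in zip(means, vars_, weights)])
    return means, vars_, weights, np.log(dens.sum(axis=0)).sum()

def split_em(x, max_components=5):
    """Coarse-to-fine fit: start from one component covering the whole data
    set and keep splitting the broadest component while a penalized
    log-likelihood score improves."""
    k = 1
    means, vars_, weights, ll = em_gmm(
        x, np.array([x.mean()]), np.array([x.var()]), np.array([1.0]))
    best = ll - 0.5 * (3 * k - 1) * np.log(x.size)     # BIC-style penalty
    while k < max_components:
        j = int(np.argmax(vars_))                      # split the broadest component
        d = np.sqrt(vars_[j])
        m2 = np.append(means, means[j] + d); m2[j] -= d
        v2 = np.append(vars_, vars_[j])
        w2 = np.append(weights, weights[j] / 2); w2[j] /= 2
        m2, v2, w2, ll = em_gmm(x, m2, v2, w2)
        score = ll - 0.5 * (3 * (k + 1) - 1) * np.log(x.size)
        if score <= best:                              # splitting no longer pays off
            break
        means, vars_, weights, best, k = m2, v2, w2, score, k + 1
    return means, vars_, weights

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
means, _, _ = split_em(x)
print(np.sort(means))
```

On this two-cluster example, the first split pays for itself with a large likelihood gain, while a third component does not, so the search stops at two components near the true means.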
intelligent systems design and applications | 2010
Nicola Greggio; Alexandre Bernardino; Cecilia Laschi; Paolo Dario; José Santos-Victor
Gaussian mixture models are widely used for video segmentation. However, the main difficulty lies in choosing the best model complexity. Highly complex models can describe the scene accurately, but they come with high computational requirements. Low-complexity models favour segmentation speed, at the cost of a less exhaustive description. In this paper we propose an algorithm that first learns a description mixture for the first video frames and then uses these results as a starting point for the analysis of subsequent frames. We apply it to a video sequence and show its effectiveness for real-time tracking of multiple moving objects. Moreover, we integrate this procedure into a statistical foreground/background subtraction framework. We compare our procedure against state-of-the-art alternatives and show both its initialization efficacy and its improved segmentation performance.
Journal of Intelligent and Robotic Systems | 2011
Nicola Greggio; Alexandre Bernardino; Cecilia Laschi; José Santos-Victor; Paolo Dario
Visual pattern recognition is a basic capability of many species in nature. The skill of visually recognizing and distinguishing different objects in the surrounding environment gives rise to the development of sensory-motor maps in the brain, with the consequent capability of object reaching and manipulation. This paper presents the implementation of a real-time tracking algorithm for following a generic spatial object and evaluating its 3D position. The key element of our approach is a new algorithm for pattern recognition in machine vision, the Least Constrained Square-Fitting of Ellipses (LCSE), which improves on state-of-the-art ellipse fitting procedures. It is a robust and direct method for the least-square fitting of ellipses to scattered data. In this work we applied it to the iCub humanoid robotics platform, both in simulation and on the real robot, and used it as the basis for localizing a circular object within the surrounding 3D space. We compared its performance with the Hough transform and state-of-the-art ellipse fitting algorithms in terms of robustness (success/failure in object detection) and fitting precision. Our experiments include robustness against noise and occlusion, and computational complexity analyses.
international conference on tools with artificial intelligence | 2010
Nicola Greggio; Alexandre Bernardino; Cecilia Laschi; Paolo Dario; José Santos-Victor
In this paper we propose a new algorithm for the least-square fitting of ellipses to scattered data. Originally based on the method proposed by Fitzgibbon et al. in 1999, our procedure overcomes the numerical instability of that algorithm. We test our approach against the original method and another alternative on different ellipses, and then present and discuss our results.
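For reference, the original direct method of Fitzgibbon et al. (1999) can be sketched as below. This is a compact numpy rendering of that baseline formulation, whose numerical instability the paper addresses; it is not the improved procedure proposed here.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit (Fitzgibbon et al., 1999).
    Returns conic coefficients (a, b, c, d, e, f) of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, scaled so 4ac - b^2 = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                        # scatter matrix
    C = np.zeros((6, 6))               # constraint matrix encoding 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    k = int(np.argmax(eigval.real))    # the single positive eigenvalue -> ellipse
    a = eigvec[:, k].real
    return a / np.sqrt(a @ C @ a)      # rescale so the constraint holds exactly

def ellipse_center(coeffs):
    """Centre of the conic: where both partial derivatives vanish."""
    a, b, c, d, e, f = coeffs
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Noisy samples from a rotated ellipse centred at (1, 2)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
phi, A, B = 0.4, 3.0, 1.0
x = 1.0 + A * np.cos(t) * np.cos(phi) - B * np.sin(t) * np.sin(phi) \
    + rng.normal(0, 0.01, t.size)
y = 2.0 + A * np.cos(t) * np.sin(phi) + B * np.sin(t) * np.cos(phi) \
    + rng.normal(0, 0.01, t.size)
center = ellipse_center(fit_ellipse(x, y))
print(center)
```

The instability arises because the scatter matrix `S` becomes near-singular for exact or low-noise data, which is the situation the paper's reformulation targets.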
ieee-ras international conference on humanoid robots | 2008
Nicola Greggio; Luigi Manfredi; Cecilia Laschi; Paolo Dario; Maria Chiara Carrozza
This paper presents the implementation of a new algorithm for pattern recognition in machine vision, developed in our laboratory and applied to the RobotCub humanoid robotics platform simulator. The algorithm is a robust and direct method for the least-square fitting of ellipses to scattered data. RobotCub is an open source platform, created to study the development of neuro-scientific and cognitive skills in human beings, especially in children. By estimating the properties of surrounding objects (such as dimensions, distances, etc.), a subject can create a topographic map of the environment, in order to navigate through it without colliding with obstacles. In this work we implemented Maini's method for the least-square fitting of ellipses (EDFE), previously developed in our laboratory, in a robotics context. Moreover, we compared its performance with the Hough transform and other least-square ellipse fitting techniques. We used our system to detect spherical objects and applied it to the simulated RobotCub platform. We performed several tests to prove the robustness of the algorithm within the overall system, and finally we present our results.
Robotics and Autonomous Systems | 2014
Nicola Greggio; Alexandre Bernardino; Paolo Dario; José Santos-Victor
Unsupervised data clustering can be addressed by the estimation of mixture models, where the mixture components are associated with clusters in data space. In this paper we present a novel unsupervised classification algorithm based on the simultaneous estimation of the mixture parameters and the number of components (complexity). Its distinguishing aspect is the way the data space is searched. Our algorithm starts from a single component covering the whole input space and iteratively splits components according to a breadth-first search on a binary tree structure, which provides an efficient exploration of the possible solutions. The proposed scheme yields important computational savings with respect to other state-of-the-art algorithms, making it particularly suited to scenarios where execution time is an issue, such as computer and robot vision applications. The initialization procedure is unique, allowing a deterministic evolution of the algorithm, while the parameter estimation is performed with a modification of the Expectation Maximization algorithm. To compare models of different complexity we use the Minimum Message Length information criterion, which implements the trade-off between the number of components and the data log-likelihood. We validate our new approach with experiments on synthetic data, and we test and compare its computational efficiency with related approaches in data-intensive image segmentation applications.
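The model-comparison step can be illustrated with a two-part code-length score. The sketch below uses a simplified BIC-flavoured approximation rather than the full Minimum Message Length expression from the paper, and the log-likelihood values in the usage example are illustrative numbers, not measured results.

```python
import numpy as np

def message_length(loglik, n_components, n_samples):
    """Two-part score: code length of the parameters plus code length of the
    data given the model. For a 1-D mixture with scalar variances, each
    component costs a mean, a variance and a weight (3K - 1 free parameters
    in total, since weights sum to one). Lower is better."""
    n_params = 3 * n_components - 1
    return -loglik + 0.5 * n_params * np.log(n_samples)

# Illustrative comparison: going from 1 to 2 components buys a large
# likelihood gain; a third component does not pay for its own description.
scores = {k: message_length(ll, k, 500)
          for k, ll in [(1, -1000.0), (2, -900.0), (3, -898.0)]}
best_k = min(scores, key=scores.get)
print(best_k, scores)
```

This is the trade-off the criterion encodes: the likelihood term always improves with more components, so the parameter-cost term is what stops the tree search from splitting indefinitely.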
international conference on image analysis and recognition | 2010
Nicola Greggio; Alexandre Bernardino; José Santos-Victor
Image segmentation is a critical low-level visual routine for robot perception. However, most image segmentation approaches are still too slow to allow real-time robot operation. In this paper we explore a new method for image segmentation based on the expectation maximization algorithm applied to Gaussian mixtures. Our approach is fully automatic in the choice of the number of mixture components, the initialization parameters and the stopping criterion. The rationale is to start with a single Gaussian in the mixture, covering the whole data set, and split it incrementally during expectation maximization steps until a good data likelihood is reached. Since the method starts with a single Gaussian, it is more computationally efficient than others, especially in the initial steps. We show the effectiveness of the method in a series of simulated experiments with both synthetic and real images, including experiments with the iCub humanoid robot.
Archive | 2011
Nicola Greggio; Alexandre Bernardino; José Santos-Victor
In this work we propose a clustering algorithm that learns a finite Gaussian mixture model online from multivariate data, based on the expectation maximization approach. Convergence to the right number of components, as well as their means and covariances, is achieved without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and splits it incrementally during the expectation maximization steps. Once the stopping criterion has been reached, the classical EM algorithm is run with the best selected mixture in order to optimize the solution. We show the effectiveness of the method in a series of simulated experiments and compare it with a state-of-the-art alternative technique on both synthetic data and real images, including experiments with the iCub humanoid robot.
international conference on tools with artificial intelligence | 2010
Nicola Greggio; Alexandre Bernardino; Cecilia Laschi; Paolo Dario; José Santos-Victor
This work presents a new technique for estimating the parameters and number of components of a finite mixture model. The learning procedure is performed by means of an expectation maximization (EM) methodology. The key feature of our approach is a top-down hierarchical search for the number of components, together with the integration of the model selection criterion within a modified EM procedure used for learning the mixture parameters. We start with a single component covering the whole data set; new components are then added and optimized to best cover the data. The process is recursive and builds a binary-tree-like structure that effectively explores the search space. We show that our approach is faster than state-of-the-art alternatives, is insensitive to initialization, and achieves better data fits on average. We demonstrate this through a series of experiments with both synthetic and real data.
soft computing | 2018
Nicola Greggio
Nowadays, the worldwide diffusion of the Internet has made sharing information straightforward. This leads to the problem of protecting this huge amount of information (private data, commercial or strategic information) from unauthorized access. One way of protecting it is the adoption of intrusion detection systems, which reveal whether an attacker is violating an information system. In this work, an algorithm for identifying anomalies in network traffic has been studied and developed. It is based on the unsupervised fitting of a set of network data by means of finite Gaussian mixture models. Its key feature is the online selection of the number of mixture components together with the fitting parameters of each component. The best compromise between description accuracy (many components) and computational complexity (few components) is given by a derivation of the minimum message length criterion. Normal network behavior is assumed to be captured by the component with the largest covariance matrix, while the other, smaller components are considered to represent anomalies. We tested our technique on the well-known KDD99 Cup data set, in order to compare our findings directly with the state of the art. Our results show the effectiveness of this approach and encourage further improvements.
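As a toy illustration of the labelling rule above (largest-covariance component = normal traffic), the sketch below scores samples under a two-component mixture. The mixture parameters are hypothetical values standing in for an already-fitted model, and the 1-D feature is synthetic, not KDD99 data.

```python
import numpy as np

# Synthetic 1-D feature: broad "normal" traffic plus a small, tight attack cluster
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 2.0, 950), rng.normal(12.0, 0.3, 50)])

# Hypothetical mixture parameters, assumed already fitted (e.g. by online EM)
means = np.array([0.0, 12.0])
stds = np.array([2.0, 0.3])
weights = np.array([0.95, 0.05])

# Weighted density of each component at each sample; argmax = hard assignment
dens = (weights[:, None] / (np.sqrt(2 * np.pi) * stds[:, None])
        * np.exp(-(x - means[:, None]) ** 2 / (2 * stds[:, None] ** 2)))
labels = dens.argmax(axis=0)

# The broadest component is taken as normal behaviour; the rest are anomalies
normal_comp = int(np.argmax(stds))
anomalies = labels != normal_comp
print(int(anomalies.sum()))
```

The flagged count should be close to the 50 injected attack samples; how well this holds on real traffic depends entirely on how cleanly the fitted mixture separates attack modes from the bulk of the data.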