Publication


Featured research published by Emanuele Menegatti.


Robotics and Autonomous Systems | 2004

Image-based memory for robot navigation using properties of omnidirectional images

Emanuele Menegatti; Takeshi Maeda; Hiroshi Ishiguro

This paper proposes a new technique for vision-based robot navigation. The basic framework is to localise the robot by comparing images taken at its current location with reference images stored in its memory. In this work, the only sensor mounted on the robot is an omnidirectional camera. The Fourier components of the omnidirectional image provide a signature for the views acquired by the robot and can be used to simplify the solution to the robot navigation problem. The proposed system can calculate the robot position with variable accuracy (‘hierarchical localisation’) saving computational time when the robot does not need a precise localisation (e.g. when it is travelling through a clear space). In addition, the system is able to self-organise its visual memory of the environment. The self-organisation of visual memory is essential to realise a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.
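
To make the idea concrete, here is a minimal sketch (not the authors' code) of a rotation-invariant Fourier signature, assuming an already-unwrapped grayscale panorama: rotating the robot circularly shifts each row of the unwrapped image, so the per-row magnitude spectrum is unchanged, and comparing only the first few low-frequency coefficients gives the cheap, coarse matching behind hierarchical localisation. The function names and the L1 dissimilarity are illustrative assumptions.

```python
import numpy as np

def fourier_signature(panorama, n_coeffs=15):
    """Per-row magnitudes of the first n_coeffs Fourier components of an
    unwrapped (H x W) omnidirectional image.  A robot rotation shifts each
    row circularly, so the magnitude spectrum is rotation-invariant."""
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :n_coeffs])

def dissimilarity(sig_a, sig_b, k=None):
    """L1 distance between signatures; comparing only the first k
    low-frequency components gives a coarser, cheaper comparison
    (hierarchical localisation)."""
    k = k or sig_a.shape[1]
    return float(np.abs(sig_a[:, :k] - sig_b[:, :k]).sum())

def localise(current_img, memory):
    """memory: list of (place_label, signature) pairs from the visual memory."""
    sig = fourier_signature(current_img)
    return min(memory, key=lambda entry: dissimilarity(sig, entry[1]))[0]
```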


Robotics and Autonomous Systems | 2004

Image-Based Monte-Carlo Localisation with Omnidirectional Images

Emanuele Menegatti; Mauro Zoccarato; Enrico Pagello; Hiroshi Ishiguro

Monte Carlo localisation generally requires a metrical map of the environment to calculate a robot's position from the posterior probability density of a set of weighted samples. Image-based localisation, which matches a robot's current view of the environment with reference views, fails in environments with perceptual aliasing. The method we present in this paper is experimentally demonstrated to overcome these disadvantages in a large indoor environment by combining Monte Carlo and image-based localisation. It exploits the properties of the Fourier transform of omnidirectional images, while weighting the samples according to the similarity among images. We also introduce a novel strategy for solving the “kidnapped robot problem”.
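
A minimal sketch of how image similarity can replace a metrical sensor model in Monte Carlo localisation, reusing the Fourier signatures from the previous sketch: each pose sample is weighted by an exponential of the dissimilarity between the current image and the reference image nearest to the sample, then the samples are resampled systematically. The likelihood shape and `sigma` are illustrative assumptions, not the paper's exact weighting.

```python
import numpy as np

def reweight(particles, ref_poses, ref_sigs, current_sig, sigma=50.0):
    """Weight each pose sample (x, y, theta) by the similarity between the
    current image's Fourier signature and that of the reference image
    nearest to the sample."""
    w = np.empty(len(particles))
    for i, p in enumerate(particles):
        j = int(np.argmin(np.linalg.norm(ref_poses - p[:2], axis=1)))
        w[i] = np.exp(-np.abs(ref_sigs[j] - current_sig).sum() / sigma)
    return w / w.sum()

def systematic_resample(particles, weights, rng=None):
    """Standard systematic resampling step of the particle filter."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    u = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
    return particles[idx]
```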


intelligent robots and systems | 2012

Tracking people within groups with RGB-D data

Matteo Munaro; Filippo Basso; Emanuele Menegatti

This paper proposes a very fast and robust multi-people tracking algorithm suitable for mobile platforms equipped with an RGB-D sensor. Our approach features a novel depth-based sub-clustering method explicitly designed for detecting people within groups or near the background, and a three-term joint likelihood for limiting drifts and ID switches. Moreover, an online-learned appearance classifier is proposed that robustly specializes on a track while using the other detections as negative examples. Tests have been performed with data acquired from a mobile robot in indoor environments and on a publicly available dataset acquired with three RGB-D sensors, and the results have been evaluated with the CLEAR MOT metrics. Our method reaches near-state-of-the-art performance and very high frame rates in our distributed, ROS-based CPU implementation.
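
The three-term joint likelihood can be pictured as the product of a motion gate, an appearance score, and the detector confidence. The sketch below is a hedged reconstruction under that reading; the `Track`/`Detection` fields and the Gaussian motion term are assumptions, not the paper's exact formulation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Track:
    predicted_position: np.ndarray   # (x, y) predicted by the motion model
    innovation_cov: np.ndarray       # 2x2 innovation covariance

@dataclass
class Detection:
    position: np.ndarray             # (x, y) on the ground plane
    appearance_score: float          # online appearance classifier output, in [0, 1]
    detector_confidence: float       # people-detector confidence, in [0, 1]

def joint_likelihood(track: Track, det: Detection) -> float:
    """Illustrative three-term joint likelihood:
    motion gate x appearance x detector confidence."""
    innov = det.position - track.predicted_position
    motion = float(np.exp(-0.5 * innov @ np.linalg.inv(track.innovation_cov) @ innov))
    return motion * det.appearance_score * det.detector_confidence
```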


international conference on robotics and automation | 2009

Range-only SLAM with a mobile robot and a Wireless Sensor Network

Emanuele Menegatti; Andrea Zanella; Stefano Zilli; Francesco Zorzi; Enrico Pagello

This paper presents the localization of a mobile robot while simultaneously mapping the positions of the nodes of a Wireless Sensor Network (WSN) using only range measurements. The robot can estimate the distance to nearby nodes of the Wireless Sensor Network by measuring the Received Signal Strength Indicator (RSSI) of the received radio messages. The RSSI measure is very noisy, especially in an indoor environment, due to interference and reflections of the radio signals. We adopted an Extended Kalman Filter SLAM algorithm to integrate RSSI measurements from the different nodes over time, while the robot moves in the environment. A simple pre-processing filter helps in reducing the RSSI variations due to interference and reflections. Successful experiments are reported in which an average localization error of less than 1 m is obtained when the SLAM algorithm has no a priori knowledge of the wireless node positions, while a localization error of less than 0.5 m can be achieved when the node positions are initialized close to their actual positions. These results are obtained using a generic path loss model for the transmission channel. Moreover, no internode communication is necessary in the WSN. This saves energy and makes it possible to apply the proposed system also to fully disconnected networks.
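
For intuition, a generic log-distance path loss model of the kind the paper mentions can be inverted to turn an RSSI reading into a (noisy) range estimate, with a simple pre-filter to damp multipath spikes before the EKF update. The calibration constants and the exponential smoothing below are placeholder assumptions, not the paper's values.

```python
def rssi_to_range(rssi_dbm, rssi_d0=-40.0, d0=1.0, path_loss_exp=2.5):
    """Invert the log-distance path loss model
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    to obtain a range estimate in metres."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * path_loss_exp))

def smooth_rssi(history, new_sample, alpha=0.2):
    """Exponential pre-filter damping multipath-induced RSSI spikes
    before the EKF update (a stand-in for the paper's pre-processing
    filter)."""
    return new_sample if history is None else alpha * new_sample + (1 - alpha) * history
```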


IEEE Transactions on Robotics | 2006

Bayesian inference in the space of topological maps

Ananth Ranganathan; Emanuele Menegatti; Frank Dellaert

While probabilistic techniques have previously been investigated extensively for performing inference over the space of metric maps, no corresponding general-purpose methods exist for topological maps. We present the concept of probabilistic topological maps (PTMs), a sample-based representation that approximates the posterior distribution over topologies, given available sensor measurements. We show that the space of topologies is equivalent to the intractably large space of set partitions on the set of available measurements. The combinatorial nature of the problem is overcome by computing an approximate, sample-based representation of the posterior. The PTM is obtained by performing Bayesian inference over the space of all possible topologies, and provides a systematic solution to the problem of perceptual aliasing in the domain of topological mapping. In this paper, we describe a general framework for modeling measurements, and the use of a Markov-chain Monte Carlo algorithm that uses specific instances of these models for odometry and appearance measurements to estimate the posterior distribution. We present experimental results that validate our technique and generate good maps when using odometry and appearance, derived from panoramic images, as sensor measurements.
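
As a sketch of sampling over set partitions, the move below reassigns one measurement to another (or a new) block inside a Metropolis loop. The `log_posterior` callable stands in for the paper's odometry and appearance models, and for brevity the acceptance test omits the Hastings correction that an asymmetric proposal like this one needs in a faithful implementation.

```python
import math
import random

def propose(partition):
    """Move one measurement to another block or open a new block
    (a simple proposal over set partitions; the paper uses richer moves)."""
    blocks = [set(b) for b in partition if b]
    i = random.randrange(len(blocks))
    m = random.choice(tuple(blocks[i]))
    blocks[i].discard(m)
    target = random.randrange(len(blocks) + 1)
    if target == len(blocks):
        blocks.append({m})          # open a new block
    else:
        blocks[target].add(m)
    return [b for b in blocks if b]

def metropolis(partition, log_posterior, steps=10_000):
    """Plain Metropolis over topologies; a faithful sampler would add
    the Hastings correction for this asymmetric proposal."""
    current, cur_lp = partition, log_posterior(partition)
    samples = []
    for _ in range(steps):
        cand = propose(current)
        lp = log_posterior(cand)
        if math.log(random.random()) < lp - cur_lp:
            current, cur_lp = cand, lp
        samples.append(current)
    return samples
```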


IEEE Transactions on Robotics | 2006

Omnidirectional vision scan matching for robot localization in dynamic environments

Emanuele Menegatti; Alberto Pretto; Alberto Scarpa; Enrico Pagello

The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real time on low-cost hardware.
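
The rangefinder-like scan can be emulated by walking radial rays outward from the mirror centre and recording the radius of the first strong colour transition. The sketch below is a simplified stand-in for the paper's colour-transition detector; the threshold and the pixel-radius output (which a real system would map to metres through the mirror model) are assumptions.

```python
import numpy as np

def vision_scan(omni_img, center, n_rays=72, max_r=240, thresh=60.0):
    """Build a laser-like scan from an (H x W x 3) omnidirectional image:
    for each radial ray, return the pixel radius of the first strong
    colour transition, or inf if none is found."""
    h, w, _ = omni_img.shape
    cx, cy = center
    scan = np.full(n_rays, np.inf)
    for k in range(n_rays):
        a = 2 * np.pi * k / n_rays
        prev = None
        for r in range(2, max_r):
            x, y = int(cx + r * np.cos(a)), int(cy + r * np.sin(a))
            if not (0 <= x < w and 0 <= y < h):
                break
            px = omni_img[y, x].astype(float)
            if prev is not None and np.abs(px - prev).sum() > thresh:
                scan[k] = r        # pixel radius; map to metres via the mirror model
                break
            prev = px
    return scan
```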


Autonomous Robots | 2014

Fast RGB-D people tracking for service robots

Matteo Munaro; Emanuele Menegatti

Service robots have to robustly follow and interact with humans. In this paper, we propose a very fast multi-people tracking algorithm designed to be applied on mobile service robots. Our approach exploits RGB-D data and can run in real time at a very high frame rate on a standard laptop without the need for a GPU implementation. It also features a novel depth-based sub-clustering method which makes it possible to detect people within groups or even standing near walls. Moreover, for limiting drifts and track ID switches, an online-learned appearance classifier featuring a three-term joint likelihood is proposed. We compared the performance of our system with a number of state-of-the-art tracking algorithms on two public datasets, acquired with three static Kinects and a moving stereo pair, respectively. In order to validate the 3D accuracy of our system, we created a new dataset in which RGB-D data are acquired by a moving robot. We made this dataset publicly available; it is not only annotated by hand, but the ground-truth positions of the people and the robot are also acquired with a motion capture system, so that tracking accuracy and precision can be evaluated in 3D coordinates. Results of experiments on these datasets are presented, showing that, even without a GPU, our approach achieves state-of-the-art accuracy and superior speed.
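
One way to picture depth-based sub-clustering is to project a merged cluster onto its main ground-plane axis and split it wherever distinct head-level height peaks appear. The sketch below follows that reading; the axis choice, bin width, and height thresholds are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def subcluster_by_height(points, bin_w=0.10, min_peak=1.3, min_sep=0.30):
    """Split a merged person cluster by head-level height peaks.
    points: (N, 3) array of x, y, z in metres, z pointing up."""
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    axis = np.linalg.svd(xy, full_matrices=False)[2][0]   # main spread direction
    t = xy @ axis
    bins = ((t - t.min()) / bin_w).astype(int)
    top = np.full(bins.max() + 1, -np.inf)
    np.maximum.at(top, bins, points[:, 2])                # max height per bin
    n_bins = len(top)
    peaks = [i for i in range(n_bins)
             if top[i] > min_peak
             and top[i] >= top[max(0, i - 1)]
             and top[i] >= top[min(n_bins - 1, i + 1)]]
    heads = []                                            # keep peaks >= min_sep apart
    for p in peaks:
        if not heads or (p - heads[-1]) * bin_w >= min_sep:
            heads.append(p)
    if len(heads) < 2:
        return [points]                                   # nothing to split
    centers = np.array(heads) * bin_w + t.min()
    labels = np.argmin(np.abs(t[:, None] - centers[None, :]), axis=1)
    return [points[labels == j] for j in range(len(centers))]
```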


international conference on robotics and automation | 2009

A visual odometry framework robust to motion blur

Alberto Pretto; Emanuele Menegatti; Wolfram Burgard; Enrico Pagello

Motion blur is a severe problem in images grabbed by legged robots and, in particular, by small humanoid robots. Standard feature extraction and tracking approaches typically fail when applied to sequences of images strongly affected by motion blur. In this paper, we propose a new feature detection and tracking scheme that is robust even to non-uniform motion blur. Furthermore, we developed a framework for visual odometry based on features extracted from and matched across monocular image sequences. To reliably extract and track the features, we estimate the point spread function (PSF) of the motion blur individually for image patches obtained via a clustering technique, and only consider highly distinctive features during matching. We present experiments performed on standard datasets corrupted with motion blur and on images taken by a camera mounted on small walking humanoid robots to show the effectiveness of our approach. The experiments demonstrate that our technique is able to reliably extract and match features and that it is furthermore able to generate a correct visual odometry, even in the presence of strong motion blur effects and without the aid of any inertial measurement sensor.
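
A classical heuristic for the per-patch blur estimate: motion blur suppresses intensity gradients along the motion direction, so the weakest eigenvector of the gradient structure tensor approximates the blur direction. This is one standard way to seed a linear PSF estimate, offered here as a hedged illustration rather than the paper's exact estimator.

```python
import numpy as np

def blur_direction(patch):
    """Estimate the motion-blur direction of a grayscale image patch from
    the gradient structure tensor: blur suppresses gradients along the
    motion, so the eigenvector with the smallest eigenvalue (least
    gradient energy) approximates the blur direction."""
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    evals, evecs = np.linalg.eigh(J)       # eigenvalues in ascending order
    return evecs[:, 0]                     # unit vector along the blur
```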


PLOS ONE | 2013

Different approaches for extracting information from the co-occurrence matrix.

Loris Nanni; Sheryl Brahnam; Stefano Ghidoni; Emanuele Menegatti; Tonya Barrier

In 1979, Haralick famously introduced a method for analyzing the texture of an image: a set of statistics extracted from the co-occurrence matrix. In this paper we investigate novel sets of texture descriptors extracted from the co-occurrence matrix; in addition, we compare and combine different strategies for extending these descriptors. The following approaches are compared: the standard approach proposed by Haralick, two methods that consider the co-occurrence matrix as a three-dimensional shape, a gray-level run-length set of features and the direct use of the co-occurrence matrix projected onto a lower dimensional subspace by principal component analysis. Texture descriptors are extracted from the co-occurrence matrix evaluated at multiple scales. Moreover, the descriptors are extracted not only from the entire co-occurrence matrix but also from subwindows. The resulting texture descriptors are used to train a support vector machine and ensembles of support vector machines. Results show that our novel extraction methods improve the performance of standard methods. We validate our approach across six medical datasets representing different image classification problems using the Wilcoxon signed rank test. The source code used for the approaches tested in this paper will be available at: http://www.dei.unipd.it/wdyn/?IDsezione=3314&IDgruppo_pass=124&preview=.
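
For reference, the sketch below computes a normalised grey-level co-occurrence matrix for one offset and three of Haralick's classic statistics (contrast, energy, homogeneity). It illustrates the standard starting point the paper builds on, not the novel descriptors; the 8-bit input and the quantisation to eight levels are choices made for the example.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for offset (dx, dy),
    with an 8-bit image quantised to `levels` grey levels."""
    q = (img.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    return m / m.sum()

def haralick_subset(p):
    """Three of Haralick's classic statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    return {"contrast": float(np.sum((i - j) ** 2 * p)),
            "energy": float(np.sum(p ** 2)),
            "homogeneity": float(np.sum(p / (1 + np.abs(i - j))))}
```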


Proceedings of the IEEE | 2006

Cooperation Issues and Distributed Sensing for Multirobot Systems

Enrico Pagello; Antonio D'Angelo; Emanuele Menegatti

This paper considers the properties a multirobot system should exhibit to perform an assigned task cooperatively. Our experiments specifically regard the domain of RoboCup Middle-Size League (MSL) competitions, but the illustrated techniques can also be usefully applied to other service robotics fields such as, for example, video surveillance. Two issues are addressed in the paper. The former refers to the problem of dynamic role assignment in a team of robots. The latter concerns the problem of sharing sensory information to cooperatively track moving objects. Both of these problems have been extensively investigated over the past years by the MSL robot teams. In our paper, each individual robot has been designed to become reactively aware of the environment configuration. In addition, a dynamic role assignment policy among teammates is activated, based on the knowledge about the best behavior that the team is able to acquire through the shared sensory information. We present the successful performance of the Artisti Veneti robot team at the MSL Challenge competitions of RoboCup-2003 to show the effectiveness of our proposed hybrid architecture, as well as some tests run in the laboratory to validate the omnidirectional distributed vision system which allows us to share the information gathered by the omnidirectional cameras of our robots.
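
Dynamic role assignment can be illustrated as choosing the role assignment that maximises total team utility computed from the shared world model. The brute-force search and the `utility(robot, role)` callable below are assumptions for the sketch; real MSL teams typically use faster greedy or market-based schemes.

```python
from itertools import permutations

def assign_roles(robots, roles, utility):
    """Pick the role permutation maximising total team utility.
    `utility(robot, role)` is assumed to be computed from the shared
    world model (e.g. distance to ball, field position)."""
    best, best_u = None, float("-inf")
    for perm in permutations(roles, len(robots)):
        u = sum(utility(r, role) for r, role in zip(robots, perm))
        if u > best_u:
            best, best_u = dict(zip(robots, perm)), u
    return best
```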
