Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Alberto Pretto is active.

Publication


Featured research published by Alberto Pretto.


IEEE Transactions on Robotics | 2006

Omnidirectional vision scan matching for robot localization in dynamic environments

Emanuele Menegatti; Alberto Pretto; Alberto Scarpa; Enrico Pagello

The localization problem for an autonomous robot moving in a known environment is a well-studied problem which has seen many elegant solutions. Robot localization in a dynamic environment populated by several moving obstacles, however, is still a challenge for research. In this paper, we use an omnidirectional camera mounted on a mobile robot to perform a sort of scan matching. The omnidirectional vision system finds the distances of the closest color transitions in the environment, mimicking the way laser rangefinders detect the closest obstacles. The similarity of our sensor with classical rangefinders allows the use of practically unmodified Monte Carlo algorithms, with the additional advantage of being able to easily detect occlusions caused by moving obstacles. The proposed system was initially implemented in the RoboCup Middle-Size domain, but the experiments we present in this paper prove it to be valid in a general indoor environment with natural color transitions. We present localization experiments both in the RoboCup environment and in an unmodified office environment. In addition, we assessed the robustness of the system to sensor occlusions caused by other moving robots. The localization system runs in real-time on low-cost hardware.
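The scan-matching analogy above maps directly onto a standard Monte Carlo localization update. The following is a generic, simplified sketch of such an update for a range-like sensor on an occupancy grid; all function names, the grid representation, and the noise parameters are assumptions for illustration, not the authors' code.

import numpy as np

def cast_ray(grid, x, y, theta, max_range=5.0, step=0.05, resolution=0.05):
    """Naive ray marching on a binary grid (True = color transition / obstacle)."""
    r = 0.0
    while r < max_range:
        col = int((x + r * np.cos(theta)) / resolution)
        row = int((y + r * np.sin(theta)) / resolution)
        if (col < 0 or row < 0 or row >= grid.shape[0] or col >= grid.shape[1]
                or grid[row, col]):
            return r
        r += step
    return max_range

def measurement_update(particles, weights, scan, beam_angles, grid, sigma=0.2):
    """Re-weight each particle (x, y, heading) with a Gaussian beam model."""
    new_w = np.empty_like(weights)
    for i, (x, y, th) in enumerate(particles):
        expected = np.array([cast_ray(grid, x, y, th + a) for a in beam_angles])
        new_w[i] = weights[i] * np.exp(-0.5 * np.sum(((scan - expected) / sigma) ** 2))
    return new_w / (new_w.sum() + 1e-300)  # normalize, guarding against underflow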


International Conference on Robotics and Automation | 2009

A visual odometry framework robust to motion blur

Alberto Pretto; Emanuele Menegatti; Wolfram Burgard; Enrico Pagello

Motion blur is a severe problem in images grabbed by legged robots and, in particular, by small humanoid robots. Standard feature extraction and tracking approaches typically fail when applied to sequences of images strongly affected by motion blur. In this paper, we propose a new feature detection and tracking scheme that is robust even to non-uniform motion blur. Furthermore, we developed a framework for visual odometry based on features extracted from and matched across monocular image sequences. To reliably extract and track the features, we estimate the point spread function (PSF) of the motion blur individually for image patches obtained via a clustering technique, and only consider highly distinctive features during matching. We present experiments performed on standard datasets corrupted with motion blur and on images taken by a camera mounted on small walking humanoid robots to show the effectiveness of our approach. The experiments demonstrate that our technique is able to reliably extract and match features and that it is furthermore able to generate a correct visual odometry, even in the presence of strong motion blur and without the aid of any inertial measurement sensor.
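For context, a bare-bones two-frame monocular visual odometry step can be written with OpenCV as below. It deliberately omits the paper's blur-robust feature detection and PSF estimation and is only a sketch of the surrounding pipeline; the feature detector and tracker choices are assumptions.

import cv2
import numpy as np

def relative_motion(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between two grayscale frames.
    K is the 3x3 camera matrix, e.g. np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])."""
    # Detect corners in the previous frame and track them into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    # Robustly estimate the essential matrix and decompose it into R, t.
    E, mask = cv2.findEssentialMat(good_curr, good_prev, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K, mask=mask)
    return R, t  # t is only known up to scale for a monocular camera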


International Conference on Robotics and Automation | 2014

A Robust and Easy to Implement Method for IMU Calibration without External Equipments

David Tedaldi; Alberto Pretto; Emanuele Menegatti

Motion sensors such as inertial measurement units (IMUs) are widely used in robotics, for instance in navigation and mapping tasks. Nowadays, many low-cost IMUs based on micro-electro-mechanical systems (MEMS) are available off the shelf, and smartphones and similar devices are almost always equipped with low-cost embedded IMU sensors. Nevertheless, low-cost IMUs are affected by systematic errors caused by imprecise scaling factors and axis misalignments, which decrease the accuracy of position and attitude estimation. In this paper, we propose a robust and easy-to-implement method to calibrate an IMU without any external equipment. The procedure is based on a multi-position scheme that provides scale and misalignment factors for both the accelerometer and gyroscope triads, while also estimating the sensor biases. Our method only requires the sensor to be moved by hand and placed in a set of different, static positions (attitudes). We describe a robust and quick calibration protocol that exploits an effective parameterless static filter to reliably detect the static intervals in the sensor measurements, where we assume local stability of the gravity's magnitude and a stable temperature. We first calibrate the accelerometer triad using measurement samples taken in the static intervals. We then exploit these results to calibrate the gyroscopes, employing a robust numerical integration technique. The performance of the proposed calibration technique has been successfully evaluated via extensive simulations and real experiments with a commercial IMU provided with a calibration certificate as reference data.
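One ingredient mentioned above, the detection of static intervals, can be illustrated with a simple variance test over a sliding window of accelerometer samples. This is a hedged sketch rather than the paper's parameterless filter; the window length and threshold are illustrative assumptions.

import numpy as np

def static_intervals(acc, window=100, threshold=0.02):
    """acc: (N, 3) accelerometer samples in m/s^2.
    Returns a boolean mask marking samples that belong to a low-variance
    (presumably static) interval of the specific-force magnitude."""
    mag = np.linalg.norm(acc, axis=1)
    mask = np.zeros(len(mag), dtype=bool)
    for start in range(0, len(mag) - window + 1):
        win = mag[start:start + window]
        if win.var() < threshold:          # low variance -> sensor likely at rest
            mask[start:start + window] = True
    return mask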


Intelligent Robots and Systems | 2004

Testing omnidirectional vision-based Monte Carlo localization under occlusion

Emanuele Menegatti; Alberto Pretto; Enrico Pagello

One of the most challenging issues in mobile robot navigation is the localization problem in densely populated environments. In this paper, we present a new approach for vision-based localization able to solve this problem. The omnidirectional camera is used as a range finder sensitive to the distance of color transitions, whereas classical range finders, like lasers or sonars, are sensitive to the distance of the nearest obstacles. The well-known Monte Carlo localization technique was adapted for this new type of range sensor. The system runs in real time on a low-cost PC. In this paper we present experiments, performed on a crowded RoboCup Middle-Size field, proving the robustness of the approach to occlusions of the vision sensor by moving obstacles (e.g., other robots), occlusions that are very likely to occur in a real environment. Although the system was implemented for the RoboCup environment, it can be used in more general environments.
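A simple way to make the measurement model tolerant to such occlusions is to discard beams whose measured distance is much shorter than the map predicts, as sketched below. The function name, threshold and noise parameter are assumptions, not the authors' implementation.

import numpy as np

def occlusion_robust_likelihood(measured, expected, sigma=0.2, occlusion_margin=0.5):
    """Gaussian likelihood computed only over beams not flagged as occluded.
    A beam much shorter than expected is assumed blocked by a moving obstacle."""
    occluded = measured < (expected - occlusion_margin)
    valid = ~occluded
    if not np.any(valid):
        return 1e-12  # all beams occluded: return a small floor value
    err = measured[valid] - expected[valid]
    return float(np.exp(-0.5 * np.sum((err / sigma) ** 2)))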


Global Communications Conference | 2010

Autonomous discovery, localization and recognition of smart objects through WSN and image features

Emanuele Menegatti; Matteo Danieletto; Marco Mina; Alberto Pretto; Andrea Bardella; Stefano Zanconato; Pietro Zanuttigh; Andrea Zanella

This paper presents a framework that enables the interaction of robotic systems and wireless sensor network technologies for discovering, localizing and recognizing a number of smart objects (SOs) placed in an unknown environment. Starting with no a priori knowledge of the environment, the robot progressively builds a virtual reconstruction of the surroundings in three phases: first, it discovers the SOs located in the area by using radio communication; second, it performs a rough localization of the SOs by using a range-only SLAM algorithm based on RSSI-range measurements; third, it refines the SOs' localization by comparing the descriptors extracted from the images acquired by the onboard camera with those transmitted by the motes attached to the SOs. Experimental results show how the combined use of the RSSI data and of the image features makes it possible to discover and localize the SOs in the environment with good accuracy.
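As background for the RSSI-range measurements, the standard log-distance path-loss model can be inverted to turn an RSSI reading into an approximate range, as in the snippet below. The reference power and path-loss exponent are illustrative constants; the paper's actual radio calibration may differ.

import numpy as np

def rssi_to_range(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.5):
    """Invert the log-distance model RSSI(d) = RSSI(1m) - 10*n*log10(d)
    to obtain an approximate range in meters from an RSSI value in dBm."""
    return 10.0 ** ((rssi_at_1m - np.asarray(rssi_dbm)) / (10.0 * path_loss_exponent))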


Conference on Automation Science and Engineering | 2013

Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system

Alberto Pretto; Stefano Tonello; Emanuele Menegatti

In this paper, we present a robust and flexible vision system for 3D localization of planar parts for industrial robots. Our system is able to work with nearly any object with a planar shape, randomly placed inside a standard industrial bin or on a conveyor belt. Differently from most systems based on 2D image analysis, which usually can only manage parts disposed in single layers, our approach can estimate the 6 degrees of freedom (DoF) pose of planar objects from a single 2D image. The choice of a single-camera solution makes our system cheaper and faster compared with systems using expensive industrial 3D cameras, laser triangulation systems, or laser range finders. Our system can work with virtually any planar piece, without changing the software parameters, because the input for the recognition and localization algorithm is the CAD data of the planar part. The localization software is based on a two-step strategy: i) a candidate selection step based on a well-engineered voting scheme; ii) a refinement and best-match selection step based on a robust iterative optimize-and-score procedure. During this second step, we employ a novel strategy we call search-in-the-stack that prevents the optimization from getting stuck in local minima (representing false positives) created when objects are almost regularly stacked. Our system is currently installed in seven real-world industrial plants, with different setups, working with hundreds of different models and successfully guiding the manipulators to pick several hundred thousand pieces per year. In the experimental section, we report statistics about our system at work in real production plants over more than 60,000 cycles.
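A toy, 2D-only version of the score-and-keep-the-best-candidates idea is sketched below: candidate poses of the CAD contour are scored by how many projected contour points land on image edges, and only the top-k candidates survive to refinement. This is an illustration under strong simplifying assumptions (planar 3-DoF pose, precomputed edge map), not the paper's 6-DoF voting scheme or search-in-the-stack code.

import numpy as np

def score_pose(contour_xy, pose, edge_map):
    """contour_xy: (N, 2) CAD contour points in pixels; pose = (tx, ty, theta).
    edge_map: (H, W) boolean image of detected edges."""
    tx, ty, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = contour_xy @ R.T + np.array([tx, ty])
    cols = np.clip(pts[:, 0].astype(int), 0, edge_map.shape[1] - 1)
    rows = np.clip(pts[:, 1].astype(int), 0, edge_map.shape[0] - 1)
    return float(edge_map[rows, cols].mean())  # fraction of points on an edge

def best_candidates(contour_xy, candidate_poses, edge_map, k=5):
    """Keep the k highest-scoring candidate poses for later refinement."""
    scores = [score_pose(contour_xy, p, edge_map) for p in candidate_poses]
    order = np.argsort(scores)[::-1][:k]
    return [(candidate_poses[i], scores[i]) for i in order]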


International Conference on Robotics and Automation | 2014

Unsupervised intrinsic and extrinsic calibration of a camera-depth sensor couple

Filippo Basso; Alberto Pretto; Emanuele Menegatti

The availability of affordable depth sensors in conjunction with common RGB cameras (even in the same device, e.g., the Microsoft Kinect) provides robots with a complete and instantaneous representation of both the appearance and the 3D structure of the current surrounding environment. This type of information enables robots to safely navigate, perceive and actively interact with other agents inside the working environment. It is clear that, in order to obtain a reliable and accurate representation, not only should the intrinsic parameters of each sensor be precisely calibrated, but the extrinsic parameters relating the two sensors should also be precisely known. In this paper, we propose a human-friendly and reliable calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a camera-depth sensor couple. Real-world experiments using a Kinect show improvements in both the 3D structure estimation and the association tasks.
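The extrinsic part of such a calibration can be illustrated in isolation: given corresponding 3D points observed by the two sensors, a rigid transform between them is recovered in closed form with the Kabsch/Umeyama method. This is a minimal sketch; it says nothing about the intrinsic or depth-distortion calibration discussed in the paper.

import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src_i + t ≈ dst_i, for (N, 3) point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t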


International Conference on Robotics and Automation | 2011

Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation

Alberto Pretto; Emanuele Menegatti; Enrico Pagello

In this work, we propose a robust and efficient method to build dense 3D maps using only the images grabbed by an omnidirectional camera. The map contains exhaustive information about both the structure and the appearance of the environment, and it is also well suited for large-scale environments.
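As background for the map-building step, the snippet below shows minimal linear (DLT) triangulation of a single 3D point from two calibrated views. The paper's dense, omnidirectional pipeline is considerably more involved; this is only a generic building block.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations
    of the same 3D point in the two views. Returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates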


International Conference on Intelligent Autonomous Systems | 2016

Fast and Accurate Crop and Weed Identification with Summarized Train Sets for Precision Agriculture

Ciro Potena; Daniele Nardi; Alberto Pretto

In this paper we present a perception system for agricultural robotics that enables an unmanned ground vehicle (UGV) equipped with a multispectral camera to automatically perform crop/weed detection and classification in real time. Our approach exploits a pipeline that includes two different convolutional neural networks (CNNs) applied to the input RGB + near-infrared (NIR) images. A lightweight CNN is used to perform a fast and robust, pixel-wise, binary image segmentation, in order to extract the pixels that represent projections of 3D points belonging to green vegetation. A deeper CNN is then used to classify the extracted pixels into the crop and weed classes. A further important contribution of this work is a novel unsupervised dataset summarization algorithm that automatically selects, from a large dataset, the most informative subsets that best describe the original one. This streamlines and speeds up the otherwise extremely time-consuming manual dataset labeling process, while preserving good classification performance. Experiments performed on different datasets taken from a real farm robot confirm the effectiveness of our approach.
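As a rough stand-in for the first (segmentation) stage, a fixed NDVI-style threshold on registered RGB and NIR images is shown below. The paper uses a lightweight CNN for this step, so this is only an illustrative classical baseline; the threshold value is an assumption.

import numpy as np

def vegetation_mask(rgb, nir, threshold=0.3):
    """rgb: (H, W, 3) float image in [0, 1]; nir: (H, W) float image in [0, 1].
    Returns a boolean mask that is True where green vegetation is likely."""
    red = rgb[..., 0].astype(np.float64)
    ndvi = (nir - red) / (nir + red + 1e-6)   # normalized difference vegetation index
    return ndvi > threshold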


Robot Soccer World Cup | 2005

A new omnidirectional vision sensor for Monte-Carlo localization

Emanuele Menegatti; Alberto Pretto; Enrico Pagello

In this paper, we present a new approach for omnidirectional vision-based self-localization in the RoboCup Middle-Size League. The omnidirectional vision sensor is used as a range finder (like a laser or a sonar) sensitive to color transitions instead of the nearest obstacles. This provides richer information about the environment, because it is possible to discriminate between different objects painted in different colors. We implemented a Monte-Carlo localization system slightly adapted to this new type of range sensor. The system runs in real time on a low-cost PC. Experiments demonstrated the robustness of the approach. Even though the system was implemented and tested on a RoboCup Middle-Size field, it could be used in other environments.
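Reading the omnidirectional image like a range finder can be illustrated with a toy radial scan that walks outward from the image center and returns the pixel distance of the first color-class change. The helper name and the label-image representation (e.g., labels from a color lookup table) are assumptions for illustration.

import numpy as np

def first_transition(label_image, center, angle, max_radius=200):
    """label_image: (H, W) integer color-class labels; center: (row, col) of the
    omnidirectional image center. Returns the radius (pixels) of the first
    color-class change along the given direction, or max_radius if none is found."""
    cy, cx = center
    prev = label_image[cy, cx]
    for r in range(1, max_radius):
        y = int(round(cy + r * np.sin(angle)))
        x = int(round(cx + r * np.cos(angle)))
        if not (0 <= y < label_image.shape[0] and 0 <= x < label_image.shape[1]):
            return max_radius
        if label_image[y, x] != prev:
            return r
        prev = label_image[y, x]
    return max_radius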

Collaboration


Dive into Alberto Pretto's collaborations.

Top Co-Authors

Ciro Potena

Sapienza University of Rome


Daniele Nardi

Sapienza University of Rome


Giorgio Grisetti

Sapienza University of Rome


Marco Imperoli

Sapienza University of Rome
