Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Annalisa Milella is active.

Publication


Featured research published by Annalisa Milella.


IEEE-ASME Transactions on Mechatronics | 2006

Wheel slippage and sinkage detection for planetary rovers

Giulio Reina; Lauro Ojeda; Annalisa Milella; Johann Borenstein

Mobile robots are increasingly being used in high-risk, rough-terrain situations, such as planetary exploration and military applications. Current control and localization algorithms are not well suited to rough terrain, since they generally do not consider the physical characteristics of the vehicle and its environment. Little attention has been devoted to the study of the dynamic effects occurring at the wheel-terrain interface, such as slip and sinkage. These effects compromise odometry accuracy and traction performance, and may even result in entrapment and consequent mission failure. This paper describes methods for wheel slippage and sinkage detection aimed at improving vehicle mobility on soft sandy terrain. Novel measures for wheel slip detection are presented, based on observing different onboard sensor modalities and defining deterministic conditions that indicate vehicle slippage. An innovative vision-based algorithm for wheel sinkage estimation is discussed, based on an edge-detection strategy. Experimental results, obtained with a Mars rover-type robot operating in high-slippage sandy environments and with a wheel sinkage testbed, are presented to validate our approach. It is shown that these techniques are effective in detecting wheel slip and sinkage.
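In the spirit of the deterministic slip conditions mentioned above, a minimal Python sketch of one plausible cross-check between sensor modalities follows: the yaw rate predicted from the wheel encoders of a skid-steered vehicle is compared against the gyro measurement. The track width, threshold, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def encoder_yaw_rate(v_left: float, v_right: float, track_width: float) -> float:
    """Yaw rate predicted from wheel encoders for a differential/skid-steer vehicle."""
    return (v_right - v_left) / track_width

def slip_detected(v_left: float, v_right: float, gyro_yaw_rate: float,
                  track_width: float = 0.8, threshold: float = 0.05) -> bool:
    """Flag slippage when encoder-predicted and gyro-measured yaw rates disagree.

    On firm ground both estimates should roughly agree; a large residual
    suggests one or more wheels are slipping. Threshold is illustrative.
    """
    residual = abs(encoder_yaw_rate(v_left, v_right, track_width) - gyro_yaw_rate)
    return residual > threshold

# Example: right wheels spin faster than the actual turn rate implies -> slip.
print(slip_detected(v_left=0.30, v_right=0.45, gyro_yaw_rate=0.02))  # True
```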


Sensors | 2012

Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

Giulio Reina; Annalisa Milella

Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Advanced perception systems are therefore required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation, and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized: the sensor output includes both range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground, and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the geometric appearance of 3D stereo-generated data with class labels; it then makes predictions based on past observations. It also provides training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, making it feasible for long-range, long-duration navigation over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
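As a rough illustration of the self-learning cascade described above, the sketch below labels stereo points as ground with a simple geometric test and uses those labels to fit a Gaussian color model for the second classifier. The flat-ground assumption, feature choices, and thresholds are placeholders, not the paper's adaptive training procedure.

```python
import numpy as np

def geometric_ground_labels(points_xyz: np.ndarray, max_height: float = 0.10) -> np.ndarray:
    """Label 3D points as ground when they lie near the assumed ground plane.

    Here the 'plane' is simply z ~= 0 in the vehicle frame; the actual
    system learns the geometric appearance of ground adaptively.
    """
    return np.abs(points_xyz[:, 2]) < max_height

def fit_color_model(colors: np.ndarray, labels: np.ndarray):
    """Fit mean/covariance of RGB values over geometry-labeled ground pixels."""
    ground = colors[labels]
    return ground.mean(axis=0), np.cov(ground, rowvar=False)

def classify_by_color(colors: np.ndarray, mean, cov, gate: float = 9.0) -> np.ndarray:
    """Mahalanobis-distance test of each pixel color against the ground model."""
    diff = colors - mean
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return d2 < gate

# Toy usage: geometry labels from one frame train the color classifier.
pts = np.random.randn(500, 3) * [2.0, 2.0, 0.3]   # synthetic stereo points
rgb = np.random.rand(500, 3)                      # synthetic pixel colors
mean, cov = fit_color_model(rgb, geometric_ground_labels(pts))
pred = classify_by_color(rgb, mean, cov)          # ground mask for new pixels
```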


International Journal of Advanced Robotic Systems | 2010

An Autonomous Mobile Robotic System for Surveillance of Indoor Environments

Donato Di Paola; Annalisa Milella; Grazia Cicirelli; Arcangelo Distante

The development of intelligent surveillance systems is an active research area. In this context, mobile, multi-functional robots are generally adopted as a means to reduce environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle general-purpose tasks and complex surveillance issues simultaneously and autonomously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization, and autonomous navigation, as well as surveillance tasks such as scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real-world applications of the proposed system include surveillance of wide areas (e.g., airports and museums) and buildings, and monitoring of safety equipment.


Robotics and Autonomous Systems | 2012

Self-learning classification of radar features for scene understanding

Giulio Reina; Annalisa Milella; James Patrick Underwood

Autonomous driving is a challenging problem in mobile robotics, particularly when the domain is unstructured, as in an outdoor setting. In addition, field scenarios are often characterized by low visibility, due to changes in lighting conditions, weather phenomena such as fog, rain, snow, and hail, or the presence of dust clouds and smoke. Advanced perception systems are therefore required for an off-road robot to sense and understand its environment, recognizing artificial and natural structures, topology, vegetation, and paths, while at the same time ensuring robustness under compromised visibility. In this paper, the use of millimeter-wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning approach is developed to train a classifier for radar image interpretation and autonomous navigation. The proposed classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the appearance of radar data with class labels; it then makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, making it feasible to use the system for long-range, long-duration navigation over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. A quantitative comparison with laser data is also included, showing good range accuracy and mapping ability. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
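The online update of the training set lends itself to a rolling-window implementation. The following sketch shows one plausible realization, with illustrative feature names, window size, and gating threshold: ground statistics are refit from the most recent radar features so the classifier tracks a changing environment.

```python
from collections import deque
import numpy as np

class OnlineRadarGroundModel:
    """Rolling-window ground model over radar feature vectors (a sketch)."""

    def __init__(self, window: int = 2000, gate: float = 9.0):
        self.samples = deque(maxlen=window)  # most recent ground-labeled features
        self.gate = gate

    def add_ground_samples(self, features: np.ndarray) -> None:
        """Append radar features (e.g. echo power, range, width) labeled as ground."""
        self.samples.extend(features)

    def is_ground(self, feature: np.ndarray) -> bool:
        """Mahalanobis test against statistics of the current window."""
        data = np.asarray(self.samples)
        mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)
        diff = feature - mean
        return float(diff @ np.linalg.inv(cov) @ diff) < self.gate

# Toy usage: the model is refit implicitly as old samples fall out of the window.
model = OnlineRadarGroundModel()
model.add_ground_samples(np.random.randn(500, 3) + [10.0, 5.0, 1.0])
print(model.is_ground(np.array([10.1, 5.2, 0.9])))  # likely True
```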


Industrial Robot-an International Journal | 2008

RFID‐assisted mobile robot system for mapping and surveillance of indoor environments

Annalisa Milella; Grazia Cicirelli; Arcangelo Distante

Purpose – To investigate the use of passive radio frequency identification (RFID) technology for environment mapping and surveillance by an autonomous mobile robot.

Design/methodology/approach – Proposes a fuzzy inference method to localize RFID tags in the environment.

Findings – Demonstrates that RFID technology can be successfully integrated into mobile robot systems to support navigation and provide the robot with mapping and surveillance capabilities.

Originality/value – Uses fuzzy reasoning to learn a model of the RFID device and localize the tags, enhancing the capability of the system to recognize and monitor the environment.


Journal of Field Robotics | 2015

A Self-learning Framework for Statistical Ground Classification using Radar and Monocular Vision

Annalisa Milella; Giulio Reina; James Patrick Underwood

Reliable terrain analysis is a key requirement for a mobile robot to operate safely in challenging environments, such as natural outdoor settings. In these contexts, conventional navigation systems that assume a priori knowledge of the terrain's geometric properties, appearance properties, or both would most likely fail, due to the high variability of terrain characteristics and environmental conditions. In this paper, a self-learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online. The proposed approach is of general applicability for a robot's perception purposes, and it can be implemented using a single sensor or combining different sensor modalities. In the context of this paper, two ground classification modules are presented: one based on radar data, and one based on monocular vision and supervised by the radar classifier. Both rely on online learning strategies to build a statistical feature-based model of the ground, and both implement a Mahalanobis distance classification approach for ground segmentation in their respective fields of view. In detail, the radar classifier analyzes radar observations to obtain an estimate of the ground surface location based on a set of radar features. The output of the radar classifier also serves to provide training labels to the visual classification module. Once trained, the vision-based classifier is able to discriminate between ground and nonground regions in the entire field of view of the camera. It can also detect multiple terrain components within the broad ground class. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the system. It is shown that the proposed approach is effective in detecting drivable surfaces, reaching an average classification accuracy of about 80% on the entire video frame, with the additional advantage of not requiring human intervention for training or a priori assumptions on the ground appearance.
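Both modules gate samples by Mahalanobis distance. The snippet below illustrates the standard form of such a test, where the acceptance threshold is drawn from a chi-square quantile, since squared Mahalanobis distances of Gaussian features are chi-square distributed. The feature dimension and confidence level are assumptions for the example, not values from the paper.

```python
import numpy as np
from scipy.stats import chi2

dim = 4                                   # e.g. four ground features per cell (assumed)
gate = chi2.ppf(0.95, df=dim)             # accept ~95% of true ground samples

train = np.random.randn(1000, dim)        # stand-in for learned ground features
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(x: np.ndarray) -> float:
    """Squared Mahalanobis distance of one feature vector to the ground model."""
    d = x - mean
    return float(d @ cov_inv @ d)

sample = np.zeros(dim)                    # a feature vector to classify
print(mahalanobis_sq(sample) < gate)      # True -> classified as ground
```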


Robotics and Autonomous Systems | 2014

Visual ground segmentation by radar supervision

Annalisa Milella; Giulio Reina; James Patrick Underwood; Bertrand Douillard

Imaging sensors are being increasingly used in autonomous vehicle applications for scene understanding. This paper presents a method that combines radar and monocular vision for ground modeling and scene segmentation by a mobile robot operating in outdoor environments. The proposed system features two main phases: a radar-supervised training phase and a visual classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images and to learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. This method leads to two main advantages: (a) self-supervised training of the visual classifier across the portion of the environment where the radar overlaps with the camera field of view, which avoids time-consuming manual labeling and enables online implementation; (b) continuous updating of the ground model during the operation of the vehicle, which makes the system feasible for long-range, long-duration applications. This paper details the algorithms and presents experimental tests conducted in the field using an unmanned vehicle.
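A minimal sketch of the radar-supervised patch selection step, under assumed camera intrinsics and an assumed radar-to-camera alignment: ground returns from the radar are projected into the image with a pinhole model, and patches around the projections become training examples for the visual classifier. The intrinsics and patch size are placeholders.

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],        # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_to_image(points_cam: np.ndarray) -> np.ndarray:
    """Project 3D points (already in the camera frame) to pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def ground_patches(image: np.ndarray, radar_ground_cam: np.ndarray, half: int = 8):
    """Cut image patches around projected radar ground returns."""
    patches = []
    for u, v in project_to_image(radar_ground_cam).astype(int):
        if half <= v < image.shape[0] - half and half <= u < image.shape[1] - half:
            patches.append(image[v - half:v + half, u - half:u + half])
    return patches

# Toy usage: two radar ground returns yield two positive training patches.
img = np.zeros((480, 640, 3), dtype=np.uint8)
pts = np.array([[0.5, 1.0, 8.0], [-0.4, 1.1, 10.0]])  # ground returns, camera frame
print(len(ground_patches(img, pts)))  # 2
```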


international conference on mechatronics and machine vision in practice | 2008

Multi-Sensor Surveillance of Indoor Environments by an Autonomous Mobile Robot

Donato Di Paola; David Naso; Annalisa Milella; Grazia Cicirelli; Arcangelo Distante

In this paper, we present our autonomous mobile robotic system for surveillance of indoor environments. The robot is equipped with a monocular camera, a laser scanner, encoders, and an RFID device, connected to a multi-layer decision and control scheme. Two main functions are implemented: building a map of the environment, identifying specific areas of interest marked by RFID tags; and monitoring the target zones to detect unexpected changes, such as object addition or removal, based on a vision scene change detector (V-SCD) and a laser scene change detector (L-SCD). Fuzzy logic is used to integrate the information provided by the different sensors. Applications of the proposed system include surveillance of wide areas, such as airports and museums, and monitoring of safety equipment. The feasibility of the approach is demonstrated through experimental tests performed in the ISSIA Mobile Robotics Laboratory in Bari, Italy. The results are promising, showing the method to be effective in detecting both new and removed objects in the surveyed scene, even in the presence of relatively small viewpoint changes. It is shown that the proposed robotic surveillance system successfully addresses a number of specific problems related to environment mapping, autonomous navigation, and scene processing, and can potentially be employed in real-world surveillance applications.
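As a toy illustration of fusing the two change detectors with fuzzy logic, the sketch below maps each detector's change score to a fuzzy "high change" membership and combines them with simple AND/OR rules. The membership breakpoints and rule base are invented for the example; they are not the paper's rule set.

```python
def high(score: float) -> float:
    """Fuzzy membership of 'high change' (linear ramp between 0.3 and 0.7)."""
    return min(1.0, max(0.0, (score - 0.3) / 0.4))

def change_alarm(vision_score: float, laser_score: float) -> float:
    """Rule base: alarm if vision AND laser agree, or either is very high.

    min() acts as fuzzy AND, max() as fuzzy OR; squaring is the standard
    fuzzy hedge for 'very'.
    """
    both_agree = min(high(vision_score), high(laser_score))
    either_strong = max(high(vision_score) ** 2, high(laser_score) ** 2)
    return max(both_agree, either_strong)

# Vision sees a moderate change and the laser confirms it -> strong alarm.
print(round(change_alarm(vision_score=0.6, laser_score=0.65), 2))
```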


intelligent robots and systems | 2011

Combining radar and vision for self-supervised ground segmentation in outdoor environments

Annalisa Milella; Giulio Reina; James Patrick Underwood; Bertrand Douillard

Ground segmentation is critical for a mobile robot to successfully accomplish its tasks in challenging environments. In this paper, we propose a self-supervised radar-vision classification system that allows an autonomous vehicle operating in natural terrain to automatically construct a visual model of the ground online and perform accurate ground segmentation. The system features two main phases: a training phase and a classification phase. The training stage relies on radar measurements to drive the selection of ground patches in the camera images and to learn online the visual appearance of the ground. In the classification stage, the visual model of the ground can be used to perform high-level tasks such as image segmentation and terrain classification, as well as to solve radar ambiguities. The proposed method leads to two main advantages: (a) self-supervised training of the visual classifier, where the radar allows the vehicle to automatically acquire a set of ground samples, eliminating the need for time-consuming manual labeling; (b) continuous updating of the ground model during the operation of the vehicle, which makes the system feasible for long-range, long-duration navigation applications. This paper details the proposed system and presents the results of experimental tests conducted in the field using an unmanned vehicle.
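The continuous model update mentioned in advantage (b) could, for instance, be realized with exponential forgetting, as in the sketch below: the ground model's mean and covariance are blended with statistics from each new batch of radar-selected samples. The forgetting factor and feature choice are assumptions, not values from the paper.

```python
import numpy as np

class AdaptiveGroundModel:
    """Visual ground model updated with exponential forgetting (a sketch)."""

    def __init__(self, dim: int = 3, forget: float = 0.9):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.forget = forget

    def update(self, new_samples: np.ndarray) -> None:
        """Blend the current model with statistics of the latest ground samples."""
        m, c = new_samples.mean(axis=0), np.cov(new_samples, rowvar=False)
        self.mean = self.forget * self.mean + (1 - self.forget) * m
        self.cov = self.forget * self.cov + (1 - self.forget) * c

# Toy usage: one batch of radar-selected ground pixels per camera frame.
model = AdaptiveGroundModel()
for _ in range(20):
    batch = np.random.randn(200, 3) * 0.1 + [0.4, 0.5, 0.3]  # RGB-like features
    model.update(batch)
print(model.mean.round(2))  # drifts toward the recent ground appearance
```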


international conference on advanced intelligent mechatronics | 2007

RFID-based environment mapping for autonomous mobile robot applications

Annalisa Milella; Paolo Vanadia; Grazia Cicirelli; Arcangelo Distante

Radio frequency identification (RFID) is being increasingly used as an augmentation technology in the domain of environment mapping and ubiquitous computing. In this paper, we present a novel method for localizing RFID tags embedded in indoor environments, using a mobile robot equipped with RF antennas, a reader, and a laser range finder. First, a model of the RFID system is learned, describing the likelihood of detecting a tag given its position relative to the robot. Then, based on this model, a fuzzy inference method is developed that localizes tags in the environment. Tag locations are referred to a map of the environment, which is obtained from laser range data. Results of experimental tests show that the proposed approach is accurate in localizing RFID tags and can be successfully integrated with autonomous navigation and mapping systems.
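A toy sketch of how a learned detection model and fuzzy inference can pin down a tag on a map grid: each detection contributes a fuzzy likelihood surface peaked where the model expects the tag relative to the antenna, surfaces are intersected across detections, and the tag is placed at the strongest cell. The geometry, membership shape, and grid parameters are invented for illustration.

```python
import numpy as np

GRID = 50                                     # 50 x 50 cells (assumed)
RES = 0.2                                     # 0.2 m per cell (assumed)

def detection_membership(robot_xy, heading, rng=1.5, spread=0.5):
    """Fuzzy likelihood of the tag's position given one detection event.

    Stand-in for the learned RFID model: a bell-shaped membership peaked
    'rng' meters ahead of the antenna.
    """
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    ex = robot_xy[0] + rng * np.cos(heading)
    ey = robot_xy[1] + rng * np.sin(heading)
    d = np.hypot(xs * RES - ex, ys * RES - ey)
    return np.exp(-(d / spread) ** 2)

belief = np.ones((GRID, GRID))
for pose in [((3.0, 3.0), 0.0), ((3.0, 4.0), 0.0), ((3.5, 2.5), np.pi / 4)]:
    belief *= detection_membership(*pose)     # fuzzy AND over detections

iy, ix = np.unravel_index(belief.argmax(), belief.shape)
print(f"estimated tag position: ({ix * RES:.1f} m, {iy * RES:.1f} m)")
```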

Collaboration


Dive into Annalisa Milella's collaboration.

Top Co-Authors

Giulio Reina (Instituto Politécnico Nacional)

Donato Di Paola (National Research Council)

Antonio Petitti (National Research Council)

Mario M. Foglia (Instituto Politécnico Nacional)

Paolo Spagnolo (National Research Council)

Angelo Gentile (Instituto Politécnico Nacional)