Giulio Reina
University of Salento
Publications
Featured research published by Giulio Reina.
IEEE Transactions on Robotics | 2006
Lauro Ojeda; Daniel Cruz; Giulio Reina; Johann Borenstein
This paper introduces a novel method for wheel-slippage detection and correction based on motor current measurements. The proposed method estimates wheel slippage from motor currents and adjusts encoder readings affected by slippage accordingly. The correction works only in the direction of motion, not laterally, and it requires some knowledge of the terrain. However, this knowledge does not have to be provided ahead of time by human operators. Rather, we propose three tuning techniques for determining the relevant terrain parameters automatically, in real time, during motion over unknown terrain. Two of the tuning techniques require position ground truth (i.e., GPS) to be available either continuously or sporadically. The third technique does not require any position ground truth, but it is less accurate than the other two. A comprehensive set of experimental results has been included to validate this approach.
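The core idea, current-based slip detection plus encoder correction, can be sketched as a thresholding rule. The function names, the terrain-dependent threshold, and the fixed slip ratio below are illustrative assumptions, not the authors' actual formulation:

```python
# Sketch of motor-current-based slip detection (hypothetical parameters,
# not the paper's formulation). Assumes that on a given terrain the
# traction current stays below a tuned threshold unless the wheel slips.

def detect_slip(motor_current_a, terrain_threshold_a):
    """Return True when the measured motor current exceeds the
    terrain-dependent threshold, indicating probable wheel slippage."""
    return motor_current_a > terrain_threshold_a

def correct_encoder(distance_m, slipping, slip_ratio=0.3):
    """Scale down the encoder-derived distance when slip is detected.
    slip_ratio is a hypothetical, terrain-tuned fraction of lost motion."""
    return distance_m * (1.0 - slip_ratio) if slipping else distance_m
```

In the paper, the terrain parameters standing in for `terrain_threshold_a` and `slip_ratio` are tuned online, with or without GPS ground truth.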
IEEE/ASME Transactions on Mechatronics | 2006
Giulio Reina; Lauro Ojeda; Annalisa Milella; Johann Borenstein
Mobile robots are increasingly being used in high-risk rough terrain situations, such as planetary exploration and military applications. Current control and localization algorithms are not well suited to rough terrain, since they generally do not consider the physical characteristics of the vehicle and its environment. Little attention has been devoted to the study of the dynamic effects occurring at the wheel-terrain interface, such as slip and sinkage. These effects compromise odometry accuracy and traction performance, and may even result in entrapment and consequent mission failure. This paper describes methods for wheel slippage and sinkage detection aimed at improving vehicle mobility on soft sandy terrain. Novel measures for wheel slip detection are presented, based on observing different onboard sensor modalities and defining deterministic conditions that indicate vehicle slippage. An innovative vision-based algorithm for wheel sinkage estimation is discussed, based on an edge-detection strategy. Experimental results, obtained with a Mars rover-type robot operating in high-slippage sandy environments and with a wheel sinkage testbed, are presented to validate our approach. It is shown that these techniques are effective in detecting wheel slip and sinkage.
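As a toy illustration of edge-based sinkage estimation, the sketch below locates the wheel-terrain interface along a single image column as the strongest intensity step; the paper's algorithm operates on full images of the wheel, and all names and the pixel-based measure here are hypothetical:

```python
# Minimal sketch of edge-based sinkage estimation on one grayscale image
# column crossing the wheel rim (illustrative only). The largest
# intensity jump is taken as the wheel-terrain interface.

def interface_row(column):
    """Row index of the largest absolute intensity step in a pixel column."""
    diffs = [abs(column[i + 1] - column[i]) for i in range(len(column) - 1)]
    return diffs.index(max(diffs)) + 1

def sinkage_pixels(column, wheel_bottom_row):
    """Sinkage as the pixel distance between the detected soil line and
    the (known, partly buried) lowest point of the wheel rim."""
    return wheel_bottom_row - interface_row(column)
```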
International Symposium on Safety, Security, and Rescue Robotics | 2007
Giulio Reina; Andres Vargas; Keiji Nagatani; Kazuya Yoshida
Kalman filters have been widely used for navigation in mobile robotics. One of the key problems associated with the Kalman filter is how to assign suitable statistical properties to both the dynamic and the observational models. For GPS-based localization of a rough-terrain mobile robot, the maneuvering of the vehicle and the level of measurement noise are environment-dependent and hard to predict. This is particularly true when the vehicle experiences a sudden change of state, which is typical on rugged terrain due, for example, to an obstacle or a slippery slope. Assigning constant noise levels in such applications is therefore unrealistic. In this work, we propose a real-time adaptive algorithm for GPS data processing based on the observation of residuals. Large residuals suggest poor performance of the filter, which can be improved by giving more weight to the measurements provided by the GPS through a fading-memory factor. For a finer gradation of this parameter, we use a fuzzy logic inference system implementing our physical understanding of the phenomenon. The proposed approach was validated in experimental trials comparing the performance of the adaptive algorithm with a conventional Kalman filter for vehicle localization. The results demonstrate that the novel adaptive algorithm is much more robust to sudden changes of vehicle motion and measurement errors.
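A minimal one-dimensional sketch of the residual-driven adaptation follows, with a simple piecewise rule standing in for the paper's fuzzy inference system; all gains, bounds, and function names are illustrative assumptions:

```python
# Sketch of a residual-adaptive 1-D Kalman update with a fading-memory
# factor. The paper tunes the factor with fuzzy rules; here a piecewise
# rule on the normalized residual stands in for them.

def fading_factor(residual, innovation_std):
    """Grow the factor when the normalized residual is large, de-weighting
    the stale model in favor of fresh GPS measurements (capped at 5)."""
    r = abs(residual) / innovation_std
    return 1.0 if r < 1.0 else min(r, 5.0)

def kalman_update(x, p, z, r_meas, q, lam):
    """One scalar predict/update step; lam > 1 inflates the predicted
    covariance, which raises the Kalman gain."""
    p_pred = lam * (p + q)           # fading memory inflates covariance
    k = p_pred / (p_pred + r_meas)   # Kalman gain
    x_new = x + k * (z - x)          # correct state toward measurement z
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

With a larger fading factor the same measurement pulls the estimate further, which is the intended behavior after a sudden change of vehicle state.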
Sensors | 2012
Giulio Reina; Annalisa Milella
Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized, where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented: one based on geometric data that can detect the broad class of ground, and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It also serves to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long-range and long-duration navigation over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
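The self-supervision loop (geometry labels the ground, the labels train a color model, the color model then classifies on its own) can be sketched as below. The height threshold, mean-color model, and distance tolerance are illustrative stand-ins for the paper's statistical models:

```python
# Toy sketch of the self-learning idea: a geometric rule bootstraps
# labels, which train a colour model that then classifies pixels alone.
# All thresholds and names here are illustrative assumptions.

def geometric_ground(points, max_height=0.1):
    """Label 3-D points as ground when their height is near zero."""
    return [abs(z) <= max_height for (x, y, z) in points]

def train_color_model(colors, labels):
    """Mean colour of the geometrically-labelled ground pixels."""
    ground = [c for c, is_g in zip(colors, labels) if is_g]
    n = len(ground)
    return tuple(sum(ch) / n for ch in zip(*ground))

def classify_color(color, model, tol=30.0):
    """Ground if the colour lies close to the learned ground mean."""
    dist = sum((a - b) ** 2 for a, b in zip(color, model)) ** 0.5
    return dist <= tol
```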
Robotics and Autonomous Systems | 2012
Giulio Reina; Annalisa Milella; James Patrick Underwood
Autonomous driving is a challenging problem in mobile robotics, particularly when the domain is unstructured, as in an outdoor setting. Field scenarios are often further characterized by low visibility, due to changes in lighting conditions; weather phenomena including fog, rain, snow and hail; or the presence of dust clouds and smoke. Thus, advanced perception systems are primarily required for an off-road robot to sense and understand its environment, recognizing artificial and natural structures, topology, vegetation and paths, while at the same time ensuring robustness under compromised visibility. In this paper, the use of millimeter-wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning approach is developed to train a classifier for radar image interpretation and autonomous navigation. The proposed classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the appearance of radar data with class labels. Then, it makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, thus making it feasible to use the system for long-range and long-duration navigation over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. A quantitative comparison with laser data is also included, showing good range accuracy and mapping ability. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
Journal of Field Robotics | 2011
Giulio Reina; James Patrick Underwood; Graham Brooker; Hugh F. Durrant-Whyte
Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in a poor environment survey or difficulty of interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described, along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.
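As a toy illustration of multiple target detection within a single beam, the sketch below extracts every local maximum above a noise floor from a power-range profile; real radar processing (e.g. CFAR detection) is considerably more involved, and the names here are hypothetical:

```python
# Toy sketch of multi-target extraction from one radar beam's
# power-range profile: local maxima above a noise floor are reported as
# returns. This is an illustrative stand-in, not the paper's method.

def beam_returns(power_db, noise_floor_db):
    """Indices (range bins) of local maxima exceeding the noise floor."""
    hits = []
    for i in range(1, len(power_db) - 1):
        if (power_db[i] > noise_floor_db
                and power_db[i] >= power_db[i - 1]
                and power_db[i] >= power_db[i + 1]):
            hits.append(i)
    return hits
```

A single emission can thus yield several returns, e.g. vegetation and the ground surface behind it, which is the property the abstract contrasts with laser sensing.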
International Conference on Industrial Technology | 2002
Angelo Gentile; Nicola Ivan Giannoccaro; Giulio Reina
A position-controlled pneumatic actuator using pulsewidth modulation (PWM) valve pulsing algorithms is described. The system consists of a standard double-acting cylinder controlled with two three-way solenoid valves through a 12-bit A/D PC board. The mechatronic system has the advantage of using on/off solenoid valves in place of more expensive servo valves, and it may be applied to a variety of practical positioning applications. A proportional-integral (PI) controller with position feedforward has been successfully implemented. Several experimental tests are carried out to evaluate the robustness of the control system and the performance of the novel PWM algorithm implemented. The actuator's overall performance is comparable to that achieved by other researchers using servo valves.
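The duty-cycle law can be sketched as a PI term plus a position-feedforward term, saturated to the valve range; the class, gains, and signed-duty convention below are placeholders, not the paper's tuned design:

```python
# Sketch of a PI-plus-feedforward duty-cycle law for PWM valve pulsing.
# Gains and the feedforward signal are illustrative assumptions.

class PwmPiController:
    def __init__(self, kp, ki, kff, dt):
        self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
        self.integral = 0.0

    def duty(self, setpoint, position, setpoint_rate=0.0):
        """Return a signed duty cycle in [-1, 1]; the sign would select
        which of the two three-way solenoid valves is pulsed."""
        error = setpoint - position
        self.integral += error * self.dt          # integral of error
        u = (self.kp * error
             + self.ki * self.integral
             + self.kff * setpoint_rate)          # position feedforward
        return max(-1.0, min(1.0, u))             # saturate to valve range
```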
International Conference on Robotics and Automation | 2008
Giulio Reina; Genya Ishigami; Keiji Nagatani; Kazuya Yoshida
For a mobile robot it is critical to detect and compensate for slippage, especially when driving in rough terrain environments. Due to its highly unpredictable nature, drift largely affects the accuracy of localization and control systems, even leading, in extreme cases, to the danger of vehicle entrapment with consequent mission failure. This paper presents a novel method for lateral slip estimation based on visually observing the trace produced by the wheels of the robot during traverse of soft, deformable terrain, such as that expected for lunar and planetary rovers. The proposed algorithm uses a robust Hough transform enhanced by fuzzy reasoning to estimate the angle of inclination of the wheel trace with respect to the vehicle reference frame. Any deviation of the wheel trace from the planned path of the robot suggests the occurrence of sideslip that can be detected and, more interestingly, measured. This allows one to estimate the actual heading angle of the robot, usually referred to as the slip angle. The details of the various steps of the visual algorithm are presented, and the results of experimental tests performed in the field with an all-terrain rover are shown, proving the method to be effective and robust.
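A minimal Hough-style sketch of the trace-angle estimate: sweep candidate normal directions and keep the one whose projections (rho values) cluster most tightly. This omits the paper's fuzzy enhancement, and the function is an illustrative assumption:

```python
# Minimal Hough-style estimate of the dominant line angle through a set
# of wheel-trace edge points (a stand-in for the paper's fuzzy-enhanced
# Hough transform on the trace image).

import math

def trace_angle_deg(points, angle_step=1):
    """Inclination (degrees, 0-179) of the dominant line: the normal
    direction whose rho projections have the smallest spread wins."""
    best_normal, best_spread = 0, float("inf")
    for deg in range(0, 180, angle_step):
        t = math.radians(deg)
        rhos = [x * math.cos(t) + y * math.sin(t) for x, y in points]
        spread = max(rhos) - min(rhos)
        if spread < best_spread:
            best_spread, best_normal = spread, deg
    return (best_normal + 90) % 180   # line direction = normal + 90 deg
```

Any difference between this estimated trace angle and the planned heading of the vehicle is, in the spirit of the paper, a measurable sideslip.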
Advanced Robotics | 2010
Giulio Reina; Genya Ishigami; Keiji Nagatani; Kazuya Yoshida
This paper introduces a novel method for slip angle estimation based on visually observing the traces produced by the wheels of a robot on soft, deformable terrain. The proposed algorithm uses a robust Hough transform enhanced by fuzzy reasoning to estimate the angle of inclination of the wheel trace with respect to the vehicle reference frame. Any deviation of the wheel track from the planned path of the robot suggests the occurrence of sideslip that can be detected and, more interestingly, measured. In turn, knowledge of the slip angle allows encoder readings affected by wheel slip to be adjusted and the accuracy of the position estimation system to be improved, based on an integrated longitudinal and lateral wheel-terrain slip model. The description of the visual algorithm and the odometry correction method is presented, and a comprehensive set of experimental results is included to validate this approach.
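The odometry correction can be sketched as dead reckoning along a course rotated away from the body heading by the measured slip angle; this simplified kinematic step is an assumption for illustration, not the paper's integrated longitudinal/lateral slip model:

```python
# Sketch of odometry integration corrected by a measured slip angle:
# the velocity vector is rotated from the body heading by the slip
# angle (simplified kinematics, illustrative only).

import math

def integrate_pose(x, y, heading, speed, slip_angle, dt):
    """Advance the position; with zero slip this reduces to standard
    dead reckoning along the heading direction."""
    course = heading + slip_angle          # actual direction of travel
    x += speed * math.cos(course) * dt
    y += speed * math.sin(course) * dt
    return x, y
```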
Journal of Field Robotics | 2015
Annalisa Milella; Giulio Reina; James Patrick Underwood
Reliable terrain analysis is a key requirement for a mobile robot to operate safely in challenging environments, such as in natural outdoor settings. In these contexts, conventional navigation systems that assume a priori knowledge of the terrain's geometric properties, appearance properties, or both would most likely fail, due to the high variability of the terrain characteristics and environmental conditions. In this paper, a self-learning framework for ground detection and classification is introduced, where the terrain model is automatically initialized at the beginning of the vehicle's operation and progressively updated online. The proposed approach is of general applicability for a robot's perception purposes, and it can be implemented using a single sensor or by combining different sensor modalities. In the context of this paper, two ground classification modules are presented: one based on radar data, and one based on monocular vision and supervised by the radar classifier. Both rely on online learning strategies to build a statistical feature-based model of the ground, and both implement a Mahalanobis distance classification approach for ground segmentation in their respective fields of view. In detail, the radar classifier analyzes radar observations to obtain an estimate of the ground surface location based on a set of radar features. The output of the radar classifier also serves to provide training labels to the visual classification module. Once trained, the vision-based classifier is able to discriminate between ground and nonground regions in the entire field of view of the camera. It can also detect multiple terrain components within the broad ground class. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate the system. It is shown that the proposed approach is effective in detecting the drivable surface, reaching an average classification accuracy of about 80% over the entire video frame, with the additional advantage of not requiring human intervention for training or a priori assumptions on the ground appearance.
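The Mahalanobis test at the heart of both classification modules can be sketched with a static, diagonal-covariance ground model; the actual system learns and updates a full statistical feature model online, and the names and threshold below are illustrative:

```python
# Sketch of Mahalanobis-distance ground segmentation with a diagonal
# covariance model (a static toy standing in for the paper's online,
# feature-based statistical ground model).

def mahalanobis(feature, mean, var):
    """Mahalanobis distance assuming independent feature dimensions."""
    return sum((f - m) ** 2 / v for f, m, v in zip(feature, mean, var)) ** 0.5

def is_ground(feature, mean, var, threshold=3.0):
    """Accept a feature vector as ground when it lies within `threshold`
    standard deviations of the learned ground model."""
    return mahalanobis(feature, mean, var) <= threshold
```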