Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gabriele Costante is active.

Publication


Featured research published by Gabriele Costante.


Intelligent Robots and Systems | 2016

Fast robust monocular depth estimation for obstacle detection with fully convolutional networks

Michele Mancini; Gabriele Costante; Paolo Valigi; Thomas A. Ciarfuglia

Obstacle detection is a central problem for any robotic system, and critical for autonomous systems that travel at high speeds in unpredictable environments. This is often achieved through scene depth estimation, by various means. When fast motion is considered, the detection range must be long enough to allow for safe avoidance and path planning. Current solutions often make assumptions about the motion of the vehicle that limit their applicability, or work at very limited ranges due to intrinsic constraints. We propose a novel appearance-based obstacle detection system that is able to detect obstacles at very long range and at very high speed (~300 Hz), without making assumptions about the type of motion. We achieve these results using a Deep Neural Network approach trained on real and synthetic images, trading some depth accuracy for fast, robust and consistent operation. We show how photo-realistic synthetic images are able to solve the problems of training set size and variety typical of machine learning approaches, and how our system is robust to massive blurring of test images.
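To make the described trade-off concrete, here is a minimal sketch of a fully convolutional encoder-decoder that regresses a dense depth map from a single frame and thresholds it into an obstacle mask. The architecture, layer sizes, and the 10 m threshold are illustrative assumptions, not the network from the paper.

```python
# Minimal sketch (not the authors' architecture): a fully convolutional
# encoder-decoder that regresses a dense depth map from one RGB frame.
import torch
import torch.nn as nn

class TinyDepthFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthFCN()
frame = torch.rand(1, 3, 160, 256)   # one RGB frame
depth = model(frame)                 # dense depth prediction, same resolution
obstacle_mask = depth < 10.0         # flag anything closer than 10 m (assumed)
```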


Robotics and Autonomous Systems | 2014

Evaluation of non-geometric methods for visual odometry

Thomas A. Ciarfuglia; Gabriele Costante; Paolo Valigi; Elisa Ricci

Visual Odometry (VO) is one of the fundamental building blocks of modern autonomous robot navigation and mapping. While most state-of-the-art techniques use geometric methods for camera ego-motion estimation from optical flow vectors, in the last few years learning approaches have been proposed to solve this problem. These approaches are still emerging and there is much left to explore. This work follows this track, applying Kernel Machines to monocular visual ego-motion estimation. Unlike geometric methods, learning-based approaches to monocular visual odometry allow issues like scale estimation and camera calibration to be overcome, assuming the availability of training data. While some previous works have proposed learning paradigms for VO, to our knowledge no extensive evaluation of kernel-based methods applied to Visual Odometry has been conducted. To fill this gap, in this work we consider publicly available datasets and perform several experiments in order to set a comparison baseline with traditional techniques. Experimental results show good performance of the learning algorithms and establish them as a solid alternative to the computationally intensive and complex-to-implement geometric techniques.

Highlights:
- We stress the advantages of non-geometric (learned) VO as an alternative or an addition to standard geometric methods.
- Ego-motion is computed with state-of-the-art regression techniques, namely Support Vector Machines (SVM) and Gaussian Processes (GP).
- To our knowledge, this is the first time SVM have been applied to the VO problem.
- We conduct an extensive evaluation on three publicly available datasets, spanning both indoor and outdoor environments.
- The experiments show that non-geometric VO is a good alternative, or addition, to standard VO systems.
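As a rough illustration of the regression setup evaluated here, the sketch below fits SVM and GP regressors that map optical-flow features to one ego-motion component. The data are synthetic placeholders; in practice one such regressor per motion component would be trained on features from a real sequence.

```python
# Minimal sketch, assuming optical-flow descriptors X (one row per frame pair)
# and ground-truth ego-motion y from a training sequence; both synthetic here.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 64))   # flow-based feature vectors
y_train = rng.normal(size=500)         # e.g. frame-to-frame yaw rate
X_test = rng.normal(size=(100, 64))

svm = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

yaw_svm = svm.predict(X_test)
yaw_gp, yaw_std = gp.predict(X_test, return_std=True)  # GP also gives uncertainty
```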


Intelligent Robots and Systems | 2012

A discriminative approach for appearance-based loop closing

Thomas A. Ciarfuglia; Gabriele Costante; Paolo Valigi; Elisa Ricci

The place recognition module is a fundamental component in SLAM systems, as incorrect loop closures may result in severe errors in trajectory estimation. In the case of appearance-based methods, the bag-of-words approach is typically employed for recognizing locations. This paper introduces a novel algorithm for improving loop closure detection performance by adopting a set of visual word weights, learned offline according to a discriminative criterion. The proposed weight learning approach, based on the large margin paradigm, can be used with generic similarity functions and relies on an efficient online learning algorithm in the training phase. As the computed weights are usually very sparse, a gain in terms of computational cost at recognition time is also obtained. Our experiments, conducted on publicly available datasets, demonstrate that the discriminative weights lead to loop closure detection results that are more accurate than those of the traditional bag-of-words method and that our place recognition approach is competitive with state-of-the-art methods.
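A minimal sketch of the general idea follows, assuming a weighted histogram-intersection similarity and a perceptron-style large-margin update over place triplets; the paper's actual similarity function and learning algorithm may differ.

```python
# Minimal sketch (hypothetical, not the paper's exact algorithm): a weighted
# bag-of-words similarity with a large-margin, perceptron-style weight update.
import numpy as np

def weighted_similarity(h1, h2, w):
    """Similarity between two BoW histograms under per-word weights w."""
    return np.sum(w * np.minimum(h1, h2))   # weighted histogram intersection

def margin_update(w, h_a, h_b, h_c, lr=0.1, margin=1.0):
    """Push sim(a,b) (same place) above sim(a,c) (different place) by a margin."""
    s_pos = weighted_similarity(h_a, h_b, w)
    s_neg = weighted_similarity(h_a, h_c, w)
    if s_pos - s_neg < margin:              # margin violated: adjust the weights
        grad = np.minimum(h_a, h_b) - np.minimum(h_a, h_c)
        w = np.maximum(w + lr * grad, 0.0)  # keep weights non-negative
    return w
```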


Intelligent Robots and Systems | 2014

Personalizing vision-based gestural interfaces for HRI with UAVs: a transfer learning approach

Gabriele Costante; Enrico Bellocchio; Paolo Valigi; Elisa Ricci

Following recent works on HRI for UAVs, we present a gesture recognition system which operates on the video stream recorded from a passive monocular camera installed on a quadcopter. While many challenges must be addressed in building a real-time vision-based gestural interface, in this paper we specifically focus on the problem of user personalization. Different users tend to perform the same gesture with different styles and speeds. Thus, a system trained on visual sequences depicting some users may work poorly when data from other people are presented. On the other hand, collecting and annotating many user-specific data is time consuming. To avoid these issues, in this paper we propose a personalized gestural interface. We introduce a novel transfer learning algorithm which, exploiting both data downloaded from the web and gestures collected from other users, makes it possible to learn a set of person-specific classifiers. We integrate the proposed gesture recognition module into an HRI system with a flying quadrotor robot. In our system, the UAV first localizes a person and determines their identity. Then, when the user performs a specific gesture, the system recognizes it using the associated user-specific classifier and the quadcopter executes the corresponding task. Our experimental evaluation demonstrates that the proposed personalized gesture recognition solution is advantageous with respect to generic ones.
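The sketch below conveys the flavour of learning a person-specific classifier from mixed sources by down-weighting web data and other-user data relative to the target user's few examples. The weighting scheme is a hypothetical stand-in for the paper's transfer learning algorithm, and all data are synthetic.

```python
# Minimal sketch, not the paper's transfer algorithm: a per-user gesture
# classifier where other sources are down-weighted via sample weights.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_web   = rng.normal(size=(300, 50)); y_web   = rng.integers(0, 4, 300)
X_users = rng.normal(size=(200, 50)); y_users = rng.integers(0, 4, 200)
X_user  = rng.normal(size=(20, 50));  y_user  = rng.integers(0, 4, 20)

X = np.vstack([X_web, X_users, X_user])
y = np.concatenate([y_web, y_users, y_user])
weights = np.concatenate([
    np.full(len(y_web), 0.2),     # web data: weakest supervision
    np.full(len(y_users), 0.5),   # other users: related, but styles differ
    np.full(len(y_user), 1.0),    # target user's own few examples
])

clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
```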


International Conference on Robotics and Automation | 2017

Toward Domain Independence for Learning-Based Monocular Depth Estimation

Michele Mancini; Gabriele Costante; Paolo Valigi; Thomas A. Ciarfuglia; Jeffrey A. Delmerico; Davide Scaramuzza

Modern autonomous mobile robots require a strong understanding of their surroundings in order to operate safely in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimal space and power consumption. These are highly desirable features, especially for micro aerial vehicles. In order to guarantee robust operation in real-world scenarios, the estimator is required to generalize well across diverse environments. Most existing depth estimators do not consider generalization, and only benchmark their performance on publicly available datasets after specific fine-tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling is costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground-truth data. We show how this approach is able to generalize well across different scenarios. In addition, we show how the addition of long short-term memory layers to the network helps to alleviate, in sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. We demonstrate that the network is able to generalize well with respect to different real-world environments without any fine-tuning, achieving comparable performance to state-of-the-art methods on the KITTI dataset.
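A minimal sketch of how recurrent layers can be attached to per-frame CNN features so that predictions exploit temporal context follows; the layer sizes and the per-frame scale head are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of the idea (not the paper's network): per-frame CNN features
# passed through an LSTM so that scale prediction can use the image stream.
import torch
import torch.nn as nn

class SeqDepthNet(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(),
            nn.Linear(32 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.scale_head = nn.Linear(hidden, 1)   # e.g. a global scale per frame

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                # temporal context across frames
        return self.scale_head(out)              # (B, T, 1)

scales = SeqDepthNet()(torch.rand(2, 8, 3, 64, 96))
```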


IEEE International Smart Cities Conference | 2016

SmartSEAL: A ROS based home automation framework for heterogeneous devices interconnection in smart buildings

Enrico Bellocchio; Gabriele Costante; Silvia Cascianelli; Paolo Valigi; Thomas A. Ciarfuglia

With this paper we present the SmartSEAL interconnection system developed for the nationally funded SEAL project. SEAL is a research project aimed at developing Home Automation (HA) solutions for building energy management, user customization and improved safety of a building's inhabitants. One of the main problems of HA systems is the wide range of communication standards that commercial devices use. Usually this forces the designer to choose devices from a few brands, limiting the scope of the system and its capabilities. In this context, SmartSEAL is a framework that aims to integrate heterogeneous devices, such as sensors and actuators from different vendors, providing networking features, protocols and interfaces that are easy to implement and dynamically configurable. The core of our system is a robotics middleware called the Robot Operating System (ROS). We adapted the ROS features to the HA problem, designing the network and protocol architectures for these particular needs. This software infrastructure allows for complex HA functions that could be realized only by leveraging the services provided by different devices. The system has been tested in our laboratory and installed in two real environments, Palazzo Fogazzaro in Schio and the “Le Case” childhood school in Malo. Since one of the aims of the SEAL project is the personalization of the building environment according to users' needs, and the learning of their patterns of behaviour, in the final part of this work we also describe the ongoing design and experiments to provide a machine-learning-based re-identification module implemented with Convolutional Neural Networks (CNNs). The description of this adaptation module complements the description of the SmartSEAL system and helps in understanding how to develop complex HA services with it.
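The integration pattern can be illustrated with a small rospy node that wraps a vendor-specific sensor and republishes its readings on a shared topic. The topic names and the device stub are illustrative assumptions, not SmartSEAL code.

```python
# Minimal sketch of the bridging pattern, not SmartSEAL's actual code: a ROS
# node wraps a vendor-specific device and exposes it on common ROS topics, so
# heterogeneous devices from different brands share one message bus.
import rospy
from std_msgs.msg import Float32, Bool

def on_command(msg):
    # Translate a generic ROS command into the vendor's protocol (stubbed).
    print("actuator set to", msg.data)

def main():
    rospy.init_node("smartseal_bridge")
    temp_pub = rospy.Publisher("/home/temperature", Float32, queue_size=10)
    rospy.Subscriber("/home/heater/on", Bool, on_command)
    rate = rospy.Rate(1)                     # poll the device at 1 Hz
    while not rospy.is_shutdown():
        reading = 21.5                       # stub for a vendor API call
        temp_pub.publish(Float32(reading))
        rate.sleep()

if __name__ == "__main__":
    main()
```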


Robotics Research (Springer, Cham), 107-124 | 2017

Exploiting Photometric Information for Planning Under Uncertainty

Gabriele Costante; Jeffrey A. Delmerico; Manuel Werlberger; Paolo Valigi; Davide Scaramuzza

Vision-based localization systems rely on highly textured areas to achieve accurate pose estimation. However, most previous path planning strategies propose to select trajectories with minimum pose uncertainty by leveraging only the geometric structure of the scene, neglecting the photometric information (i.e., texture). Our planner exploits the scene's visual appearance (i.e., the photometric information) in combination with its 3D geometry. Furthermore, we assume that no prior knowledge about the environment is given, meaning that there is no pre-computed map or 3D geometry available. We introduce a novel approach to update the optimal plan on the fly, as new visual information is gathered. We demonstrate our approach with real and simulated Micro Aerial Vehicles (MAVs) that perform perception-aware path planning in real time during exploration. We show significantly reduced pose uncertainty over trajectories planned without considering the perception of the robot.
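A toy sketch of the underlying intuition: among candidate trajectories, prefer the one whose expected views carry more photometric information, here using mean image-gradient magnitude as a crude texture proxy. This illustrates the idea only; it is not the authors' uncertainty-aware planner.

```python
# Minimal sketch of the idea, not the authors' planner: score candidate
# trajectories by the "texturedness" of the views expected along them.
import numpy as np

def texture_score(patch):
    """Mean gradient magnitude: a crude proxy for photometric information."""
    gy, gx = np.gradient(patch.astype(float))
    return np.mean(np.hypot(gx, gy))

def pick_trajectory(candidates, views):
    """candidates: trajectory ids; views[t]: image patches seen along t."""
    scores = {t: np.mean([texture_score(p) for p in views[t]]) for t in candidates}
    return max(scores, key=scores.get)    # more texture -> lower pose uncertainty

rng = np.random.default_rng(2)
views = {"A": [rng.random((32, 32)) for _ in range(5)],     # textured views
         "B": [np.full((32, 32), 0.5) for _ in range(5)]}   # textureless views
print(pick_trajectory(["A", "B"], views))                    # -> "A"
```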


Robotics and Autonomous Systems | 2017

Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features

Silvia Cascianelli; Gabriele Costante; Enrico Bellocchio; Paolo Valigi; Mario Luca Fravolini; Thomas A. Ciarfuglia

Visual self-localization in unknown environments is a crucial capability for an autonomous robot. Real-life scenarios often present critical challenges for autonomous vision-based localization, such as robustness to viewpoint and appearance changes. To address these issues, this paper proposes a novel strategy that models the visual scene by preserving its geometric and semantic structure and, at the same time, improves appearance invariance through a robust visual representation. Our method relies on high-level visual landmarks consisting of appearance-invariant descriptors that are extracted from image patches by a pre-trained Convolutional Neural Network (CNN). In addition, during exploration, the landmarks are organized by building an incremental covisibility graph that, at query time, is exploited to retrieve candidate matching locations, improving robustness in terms of viewpoint invariance. In this respect, through the covisibility graph, the algorithm finds location similarities more effectively by exploiting the structure of the scene, which, in turn, allows the construction of virtual locations, i.e., artificially augmented views from a real location that are useful to enhance the loop closure ability of the robot. The proposed approach has been extensively analysed and tested in different challenging scenarios taken from public datasets, and has also been compared with a state-of-the-art visual navigation algorithm.
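A minimal sketch of the graph-based retrieval step, assuming precomputed CNN patch descriptors and a networkx covisibility graph: the query's best-matching landmark plus its covisible neighbours form the candidate set. Patch extraction and the paper's full scoring pipeline are omitted.

```python
# Minimal sketch of covisibility-aware retrieval, not the paper's pipeline.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
desc = {i: rng.normal(size=256) for i in range(6)}   # CNN patch descriptors

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])  # covisible landmarks

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def candidates(query):
    best = max(desc, key=lambda i: cosine(query, desc[i]))
    return {best, *G.neighbors(best)}     # best match plus covisible neighbours

print(candidates(desc[2] + 0.1 * rng.normal(size=256)))  # e.g. {1, 2, 3}
```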


IEEE International Smart Cities Conference | 2016

A robust semi-semantic approach for visual localization in urban environment

Silvia Cascianelli; Gabriele Costante; Enrico Bellocchio; Paolo Valigi; Mario Luca Fravolini; Thomas A. Ciarfuglia

This paper provides a new contribution to the problem of vision-based place recognition, introducing a novel appearance- and viewpoint-invariant approach that guarantees robustness with respect to perceptual aliasing and kidnapping. Most state-of-the-art strategies rely on low-level visual features and ignore the semantic structure of the scene. Thus, even small changes in the appearance of the scene (e.g., illumination conditions) cause a significant performance drop. In contrast to previous work, we propose a new strategy to model the scene by preserving its geometric and semantic structure and, at the same time, achieving improved appearance invariance through a robust visual representation. In particular, to manage the perceptual aliasing problem, we introduce a covisibility graph that connects semantic entities of the scene while preserving their geometric relations. The method relies on high-level patches consisting of dense and robust descriptors that are extracted by a Convolutional Neural Network (CNN). Through the graph structure, we are able to efficiently retrieve candidate locations and to synthesize virtual locations (i.e., artificial intermediate views between two keyframes) to improve viewpoint invariance. The proposed approach has been compared with state-of-the-art approaches in different challenging scenarios taken from public datasets.
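As a loose illustration of virtual locations, the sketch below synthesizes an intermediate descriptor between two covisible keyframes by interpolation. This is an assumed simplification: the paper builds artificial intermediate views from the graph structure, not by simple descriptor averaging.

```python
# Minimal sketch (assumption-level): a synthetic "virtual location" descriptor
# interpolated between two covisible keyframes to bridge viewpoint changes.
import numpy as np

def virtual_location(d1, d2, alpha=0.5):
    """Descriptor of an artificial view between two covisible keyframes."""
    v = alpha * d1 + (1.0 - alpha) * d2
    return v / np.linalg.norm(v)          # keep it unit-norm like the inputs

rng = np.random.default_rng(4)
k1 = rng.normal(size=128); k1 /= np.linalg.norm(k1)
k2 = rng.normal(size=128); k2 /= np.linalg.norm(k2)
database = [k1, virtual_location(k1, k2), k2]   # real + virtual entries
```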


Robotics and Autonomous Systems | 2015

Transferring knowledge across robots

Gabriele Costante; Thomas A. Ciarfuglia; Paolo Valigi; Elisa Ricci

One of the most impressive characteristics of human perception is its domain adaptation capability. Humans can recognize objects and places simply by transferring knowledge from their past experience. Inspired by this, current research in robotics is addressing a great challenge: building robots able to sense and interpret the surrounding world by reusing information previously collected, gathered by other robots or obtained from the web. But how can a robot automatically understand what is useful among a large amount of information and perform knowledge transfer? In this paper we address the domain adaptation problem in the context of visual place recognition. We consider the scenario where a robot equipped with a monocular camera explores a new environment. In this situation traditional approaches based on supervised learning perform poorly, as no annotated data are provided in the new environment and the models learned from data collected in other places are inappropriate due to the large variability of visual information. To overcome these problems we introduce a novel transfer learning approach. With our algorithm the robot is given only some training data (annotated images collected in different environments by other robots) and is able to decide whether, and how much, this knowledge is useful in the current scenario. At the base of our approach is a transfer risk measure which quantifies the similarity between the given and the new visual data. To improve performance, we also extend our framework to take multiple visual cues into account. Our experiments on three publicly available datasets demonstrate the effectiveness of the proposed approach.

Highlights:
- We cast the visual place recognition problem within a transfer learning framework.
- A method to quantify the similarity between Source and Target data is proposed.
- The Kullback-Leibler divergence and the Earth Mover's Distance are compared.
- We extend our approach to integrate information from multiple visual cues.
- We perform an extensive experimental evaluation on publicly available sequences.
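The two divergences compared in the paper can be computed with standard tools; the sketch below contrasts the KL divergence and a one-dimensional Earth Mover's Distance between source and target feature histograms. The Gaussian features and binning are placeholders, and the paper's transfer risk measure is richer than this.

```python
# Minimal sketch of a transfer-risk style comparison (assumptions, not the
# paper's exact measure): source vs. target features summarized as histograms
# and compared with KL divergence and a 1-D Earth Mover's Distance.
import numpy as np
from scipy.stats import entropy, wasserstein_distance

rng = np.random.default_rng(5)
source_feats = rng.normal(0.0, 1.0, 2000)    # features from another robot
target_feats = rng.normal(0.7, 1.2, 2000)    # features from the new place

bins = np.linspace(-5, 5, 41)
p, _ = np.histogram(source_feats, bins=bins, density=True)
q, _ = np.histogram(target_feats, bins=bins, density=True)
p += 1e-9; q += 1e-9                          # avoid zero bins in the KL term

kl = entropy(p, q)                            # KL(source || target)
emd = wasserstein_distance(source_feats, target_feats)
print(f"KL={kl:.3f}  EMD={emd:.3f}")          # higher -> riskier transfer
```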

Collaboration


Dive into Gabriele Costante's collaborations.
