Abhinav Valada
University of Freiburg
Publications
Featured research published by Abhinav Valada.
International Conference on Robotics and Automation | 2016
Gabriel L. Oliveira; Abhinav Valada; Claas Bollen; Wolfram Burgard; Thomas Brox
This paper addresses the problem of human body part segmentation in conventional RGB images, which has several applications in robotics, such as learning from demonstration and human-robot handovers. The proposed solution is based on Convolutional Neural Networks (CNNs). We present a network architecture that assigns each pixel to one of a predefined set of human body part classes, such as head, torso, arms, and legs. After initializing weights with a very deep convolutional network for image classification, the network can be trained end-to-end and yields precise class predictions at the original input resolution. Our architecture particularly improves on over-fitting issues in the up-convolutional part of the network. Relying only on RGB rather than RGB-D images also allows us to apply the approach outdoors. The network achieves state-of-the-art performance on the PASCAL Parts dataset. Moreover, we introduce two new part segmentation datasets, the Freiburg sitting people dataset and the Freiburg people in disaster dataset. We also present results obtained with a ground robot and an unmanned aerial vehicle.
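The following is a minimal sketch, not the authors' architecture, of the general recipe the abstract describes: an encoder initialized from a very deep image classification network (here an ImageNet-pretrained VGG-16 from torchvision, an assumed stand-in) followed by up-convolutions that restore the input resolution and predict a body-part class per pixel, with dropout as one plausible way to counter over-fitting in the up-convolutional part.

```python
# Minimal sketch (assumed stand-in, not the authors' network): encoder from a
# pretrained VGG-16 classifier, up-convolutional decoder back to the input
# resolution, per-pixel body-part class scores.
import torch.nn as nn
from torchvision.models import vgg16

class PartSegNet(nn.Module):
    def __init__(self, num_classes=15):  # class count is a placeholder
        super().__init__()
        # Encoder: convolutional layers of an ImageNet-pretrained VGG-16
        # (the weights argument requires torchvision >= 0.13).
        self.encoder = vgg16(weights="IMAGENET1K_V1").features
        # Decoder: five stride-2 up-convolutions undo the encoder's 32x
        # downsampling; dropout is one plausible guard against over-fitting
        # in the up-convolutional part.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(0.5),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(0.5),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        # rgb: (B, 3, H, W) -> per-pixel class scores at the input resolution
        return self.decoder(self.encoder(rgb))
```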
International Symposium on Experimental Robotics | 2016
Abhinav Valada; Gabriel L. Oliveira; Thomas Brox; Wolfram Burgard
Semantic scene understanding of unstructured environments is a highly challenging task for robots operating in the real world. Deep Convolutional Neural Network architectures define the state of the art in various segmentation tasks. So far, researchers have focused on segmentation with RGB data. In this paper, we study the use of multispectral and multimodal images for semantic segmentation and develop fusion architectures that learn from RGB, Near-InfraRed channels, and depth data. We introduce a first-of-its-kind multispectral segmentation benchmark that contains 15,000 images and 366 pixel-wise ground truth annotations of unstructured forest environments. We identify new data augmentation strategies that enable training of very deep models using relatively small datasets. We show that our UpNet architecture exceeds the state of the art both qualitatively and quantitatively on our benchmark. In addition, we present experimental results for segmentation under challenging real-world conditions. Benchmark and demo are publicly available at http://deepscene.cs.uni-freiburg.de.
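As a rough illustration of multimodal fusion for segmentation, here is a minimal sketch, not the published UpNet: three tiny modality-specific streams (RGB, near-infrared, depth) produce per-class score maps that are fused by a learned 1x1 convolution. Stream sizes and the class count are placeholder assumptions.

```python
# Minimal late-fusion sketch (not the published UpNet): one small stream per
# modality, learned 1x1-convolution fusion of the stacked class score maps.
import torch
import torch.nn as nn

def make_stream(in_channels, num_classes):
    """A deliberately tiny segmentation stream for one modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, num_classes, 1),
    )

class LateFusionSegNet(nn.Module):
    def __init__(self, num_classes=6):  # class count is a placeholder
        super().__init__()
        self.rgb_stream = make_stream(3, num_classes)
        self.nir_stream = make_stream(1, num_classes)
        self.depth_stream = make_stream(1, num_classes)
        # Learned fusion of the concatenated per-stream score maps.
        self.fuse = nn.Conv2d(3 * num_classes, num_classes, kernel_size=1)

    def forward(self, rgb, nir, depth):
        scores = torch.cat(
            [self.rgb_stream(rgb), self.nir_stream(nir), self.depth_stream(depth)],
            dim=1,
        )
        return self.fuse(scores)
```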
International Conference on Robotics and Automation | 2016
Federico Boniardi; Abhinav Valada; Wolfram Burgard; Gian Diego Tipaldi
Hand-drawn sketches are a natural means by which abstract descriptions of environments can be provided. They represent weak prior information about the scene, thereby enabling a robot to perform autonomous navigation and exploration when a full metrical description of the environment is not available beforehand. In this paper, we present an extensive evaluation of our navigation system that uses a sketch interface to allow the operator of a robot to draw a rough map of an indoor environment as well as a desired trajectory for the robot to follow. We employ a theoretical framework for sketch interpretation, in which associations between the sketch and the real world are modeled as local deformations of a suitable metric manifold. We investigate the effectiveness of our system and present empirical results from a set of experiments in real-world scenarios, focusing both on the navigation capabilities and the usability of the interface.
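One way to picture the "local deformations" that associate a sketch with the real world is a distance-weighted interpolation of landmark displacements; the sketch below is a hypothetical illustration in that spirit, not the paper's formulation, and the correspondences are made-up values.

```python
# Hypothetical illustration (not the paper's method): map a sketch point into
# world coordinates by Gaussian-weighted interpolation of the displacements of
# nearby sketch-to-world landmark correspondences.
import numpy as np

def warp_sketch_point(p, sketch_landmarks, world_landmarks, sigma=1.0):
    """Map a 2-D sketch point p into world coordinates.

    sketch_landmarks, world_landmarks: (N, 2) arrays of corresponding points.
    The displacement at p is a Gaussian-weighted average of the landmark
    displacements, so nearby landmarks dominate (a local deformation).
    """
    p = np.asarray(p, dtype=float)
    d = np.linalg.norm(sketch_landmarks - p, axis=1)
    w = np.exp(-0.5 * (d / sigma) ** 2)
    w /= w.sum()
    displacements = world_landmarks - sketch_landmarks
    return p + w @ displacements

# Made-up correspondences: sketched room corners matched to metric positions.
sketch_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
world_pts = np.array([[0.0, 0.0], [5.2, 0.1], [5.0, 4.8], [-0.1, 5.0]])
print(warp_sketch_point([0.5, 0.5], sketch_pts, world_pts, sigma=0.7))
```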
International Symposium on Robotics Research (ISRR) | 2018
Abhinav Valada; Luciano Spinello; Wolfram Burgard
In order for robots to navigate efficiently in real-world environments, they need to be able to classify and characterize terrain for safe navigation. The majority of terrain classification techniques are based on visual features. However, as vision-based approaches are severely affected by appearance variations and occlusions, relying solely on them prevents robust operation in all conditions. In this paper, we propose an approach that uses sound from vehicle-terrain interactions for terrain classification. We present a new convolutional neural network architecture that learns deep features from spectrograms of extensive audio signals gathered from interactions with various indoor and outdoor terrains. Through exhaustive experiments, we demonstrate that our network significantly outperforms classification approaches using traditional audio features and achieves state-of-the-art performance. Additional experiments reveal that the network remains robust when the input is corrupted with varying amounts of white Gaussian noise and that fine-tuning with noise-augmented samples significantly boosts the classification rate. Furthermore, we demonstrate that our network performs exceptionally well even with samples recorded with a low-quality mobile phone microphone that adds a substantial amount of environmental noise.
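A minimal sketch of the overall idea, assuming a spectrogram input and a placeholder class count rather than the paper's exact network: a small CNN classifies terrain from a vehicle-terrain interaction spectrogram, and white Gaussian noise is added at a chosen SNR to produce the kind of noise-augmented samples the abstract mentions.

```python
# Minimal sketch (placeholder sizes, not the paper's network): CNN over an
# audio spectrogram for terrain classification, plus white-Gaussian-noise
# augmentation at a target signal-to-noise ratio.
import torch
import torch.nn as nn

class AudioTerrainCNN(nn.Module):
    def __init__(self, num_terrains=9):  # class count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_terrains)

    def forward(self, spectrogram):
        # spectrogram: (B, 1, freq_bins, time_frames) -> terrain class scores
        return self.classifier(self.features(spectrogram).flatten(1))

def add_white_gaussian_noise(spectrogram, snr_db):
    """Corrupt a spectrogram tensor with white Gaussian noise at a given SNR."""
    signal_power = spectrogram.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return spectrogram + torch.randn_like(spectrogram) * noise_power.sqrt()
```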
International Conference on Robotics and Automation | 2017
Abhinav Valada; Johan Vertens; Ankit Dhall; Wolfram Burgard
Robust scene understanding of outdoor environments using passive optical sensors is an onerous and essential task for autonomous navigation. The problem is heavily characterized by changing environmental conditions throughout the day and across seasons. Robots should be equipped with models that are impervious to these factors in order to remain operable and, more importantly, to ensure safety in the real world. In this paper, we propose a novel semantic segmentation architecture and the convoluted mixture of deep experts (CMoDE) fusion technique that enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which is specialized in a subset of the input space. Our model adaptively weights class-specific features of expert networks based on the scene condition and further learns fused representations to yield robust segmentation. We present results from experiments on three publicly available datasets that contain diverse conditions including rain, summer, winter, dusk, fall, night and sunset, and show that our approach exceeds the state of the art. In addition, we evaluate the performance of autonomously traversing several kilometres of a forested environment using only the segmentation for perception.
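The following is a minimal sketch of an adaptive mixture-of-experts fusion in the general spirit of the abstract, not the published CMoDE: a small gating module pools the experts' class score maps and produces per-expert, per-class weights that are normalized across experts before a weighted sum. Tensor shapes and the gating design are assumptions.

```python
# Minimal adaptive expert-fusion sketch (not the published CMoDE): a gating
# module predicts per-expert, per-class weights from pooled score maps and
# combines the experts' class-specific outputs accordingly.
import torch
import torch.nn as nn

class AdaptiveExpertFusion(nn.Module):
    def __init__(self, num_experts, num_classes):
        super().__init__()
        # Gate: global pooling over each expert's score maps -> one weight per
        # (expert, class), normalized across experts with a softmax.
        self.gate = nn.Linear(num_experts * num_classes, num_experts * num_classes)

    def forward(self, expert_scores):
        # expert_scores: (B, E, C, H, W) class score maps from E expert streams
        b, e, c, h, w = expert_scores.shape
        pooled = expert_scores.mean(dim=(3, 4)).flatten(1)        # (B, E*C)
        weights = self.gate(pooled).view(b, e, c).softmax(dim=1)  # (B, E, C)
        # Weighted sum over experts -> fused (B, C, H, W) score maps.
        return (expert_scores * weights[..., None, None]).sum(dim=1)
```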
The International Journal of Robotics Research | 2017
Abhinav Valada; Wolfram Burgard
Terrain classification is a critical component of any autonomous mobile robot system operating in unknown real-world environments. Over the years, several proprioceptive terrain classification techniques have been introduced to increase robustness or to act as a fallback for traditional vision-based approaches. However, they lack widespread adoption due to various factors, including inadequate accuracy and robustness as well as slow run-times. In this paper, we use vehicle-terrain interaction sounds as a proprioceptive modality and propose a deep long short-term memory based recurrent model that captures both the spatial and temporal dynamics of the problem, thereby overcoming these past limitations. Our model consists of a new convolutional neural network architecture that learns deep spatial features, complemented with long short-term memory units that learn complex temporal dynamics. Experiments on two extensive datasets collected with different microphones on various indoor and outdoor terrains demonstrate state-of-the-art performance compared to existing techniques. We additionally evaluate the performance in adverse acoustic conditions with high ambient noise and propose a noise-aware training scheme that enables learning of more generalizable models that are essential for robust real-world deployments.
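A minimal sketch of a convolutional-recurrent audio classifier in the spirit of the abstract, not the paper's exact model: a small CNN extracts features from each spectrogram window, an LSTM consumes the windows in temporal order, and the final hidden state is mapped to terrain classes. Window format and dimensions are placeholder assumptions.

```python
# Minimal convolutional-recurrent sketch (placeholder sizes, not the paper's
# model): per-window CNN features, an LSTM over time, final-state classifier.
import torch
import torch.nn as nn

class ConvLSTMTerrainClassifier(nn.Module):
    def __init__(self, num_terrains=9, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_terrains)

    def forward(self, windows):
        # windows: (B, T, 1, freq_bins, frames) -- T spectrogram windows in time
        b, t = windows.shape[:2]
        feats = self.cnn(windows.flatten(0, 1)).flatten(1).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        # Classify from the last layer's final hidden state.
        return self.classifier(h_n[-1])
```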
International Conference on Robotics and Automation | 2018
Abhinav Valada; Noha Radwan; Wolfram Burgard
Intelligent Robots and Systems | 2017
Johan Vertens; Abhinav Valada; Wolfram Burgard
Robotics: Science and Systems | 2016
Abhinav Valada; Gabriel L. Oliveira; Thomas Brox; Wolfram Burgard
Archive | 2015
Federico Boniardi; Abhinav Valada; Wolfram Burgard; Gian Diego Tipaldi