Marija Popovic
ETH Zurich
Publications
Featured research published by Marija Popovic.
Intelligent Robots and Systems | 2017
Marija Popovic; Teresa A. Vidal-Calleja; Gregory Hitz; Inkyu Sa; Roland Siegwart; Juan I. Nieto
Unmanned aerial vehicles (UAVs) can offer timely and cost-effective delivery of high-quality sensing data. However, deciding when and where to take measurements in complex environments remains an open challenge. To address this issue, we introduce a new multiresolution mapping approach for informative path planning in terrain monitoring using UAVs. Our strategy exploits the spatial correlation encoded in a Gaussian Process model as a prior for Bayesian data fusion with probabilistic sensors. This allows us to incorporate altitude-dependent sensor models for aerial imaging and perform constant-time measurement updates. The resulting maps are used to plan information-rich trajectories in continuous 3-D space through a combination of grid search and evolutionary optimization. We evaluate our framework on the application of agricultural biomass monitoring. Extensive simulations show that our planner performs better than existing methods, with mean error reductions of up to 45% compared to traditional “lawnmower” coverage. We demonstrate proof of concept using a multirotor to map color in different environments.
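The constant-time measurement update described above can be sketched as a per-cell product-of-Gaussians (Kalman) fusion. The altitude-dependent noise model below (`sigma0`, `k`, and linear growth with altitude) is a hypothetical stand-in for the paper's calibrated sensor model:

```python
import numpy as np

def fuse_measurement(mean, var, z, altitude, sigma0=0.05, k=0.01):
    """Per-cell Bayesian fusion of observation z into a Gaussian map cell.

    Measurement noise grows with altitude (hypothetical model:
    sigma^2 = sigma0^2 + k * altitude), so low-altitude images are
    trusted more. The update is the standard Kalman/product-of-Gaussians
    rule and runs in constant time per cell.
    """
    sigma2 = sigma0**2 + k * altitude   # altitude-dependent sensor variance
    gain = var / (var + sigma2)         # Kalman gain
    new_mean = mean + gain * (z - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

# Example: a prior cell belief fused with a low- then a high-altitude reading.
m, v = 0.5, 0.2
m, v = fuse_measurement(m, v, z=0.8, altitude=10.0)
m, v = fuse_measurement(m, v, z=0.8, altitude=40.0)
```

Each update moves the cell mean toward the observation and shrinks its variance, with the high-altitude reading contributing less.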
International Conference on Robotics and Automation | 2017
Marija Popovic; Gregory Hitz; Juan I. Nieto; Inkyu Sa; Roland Siegwart; Enric Galceran
In this paper, we introduce an informative path planning (IPP) framework for active classification using unmanned aerial vehicles (UAVs). Our algorithm uses a combination of global viewpoint selection and evolutionary optimization to refine the planned trajectory in continuous 3D space while satisfying dynamic constraints. Our approach is evaluated on the application of weed detection for precision agriculture. We model the presence of weeds on farmland using an occupancy grid and generate adaptive plans according to information-theoretic objectives, enabling the UAV to gather data efficiently. We validate our approach in simulation by comparing against existing methods, and study the effects of different planning strategies. Our results show that the proposed algorithm builds maps with over 50% lower entropy compared to traditional “lawnmower” coverage in the same amount of time. We demonstrate the planning scheme on a multirotor platform with different artificial farmland set-ups.
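A minimal sketch of the information-theoretic objective behind the entropy comparison above: the Shannon entropy of an occupancy grid, which adaptive plans try to drive down as quickly as possible. The grid values here are illustrative, not from the paper's experiments:

```python
import numpy as np

def map_entropy(p):
    """Shannon entropy (in bits) of an occupancy grid with per-cell
    occupancy probabilities p; lower entropy means a more certain map."""
    p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0) at fully decided cells
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

# An unexplored map (all cells at 0.5) has maximal entropy; measurements
# that push cells toward 0 (free) or 1 (weed) reduce it.
before = map_entropy(np.full(100, 0.5))   # 1 bit per cell
after = map_entropy(np.full(100, 0.1))    # far more certain
```

Comparing `before` and `after` for equal flight time is exactly the kind of metric the 50%-lower-entropy claim refers to.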
IEEE Robotics & Automation Magazine | 2018
Inkyu Sa; Mina Kamel; Michael Burri; Michael Bloesch; Raghav Khanna; Marija Popovic; Juan I. Nieto; Roland Siegwart
This article describes an approach to building a cost-effective and research-grade visual-inertial (VI) odometry-aided vertical takeoff and landing (VTOL) platform. We utilize an off-the-shelf VI sensor, an onboard computer, and a quadrotor platform, all of which are factory calibrated and mass produced, thereby sharing similar hardware and sensor specifications [e.g., mass, dimensions, intrinsics and extrinsics of camera-inertial measurement unit (IMU) systems, and signal-to-noise ratio]. We then perform system calibration and identification, enabling the use of our VI odometry, multisensor fusion (MSF), and model predictive control (MPC) frameworks with off-the-shelf products. This approach partially circumvents the tedious parameter-tuning procedures required to build a full system. The complete system is extensively evaluated both indoors using a motion-capture system and outdoors using a laser tracker while performing hover and step responses and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (RMS) pose errors of 0.036 m with respect to reference hover trajectories. We also conduct relatively long-distance (>180 m) experiments on a farm site, demonstrating a 0.82% drift error of the total flight distance. This article conveys the insights we acquired about the platform and sensor module and offers open-source code with tutorial documentation to the community.
Archive | 2019
Christos Papachristos; Mina Kamel; Marija Popovic; Shehryar Khattak; Andreas Bircher; Helen Oleynikova; Tung Dang; Frank Mascarich; Kostas Alexis; Roland Siegwart
This use case chapter presents a set of algorithms for the problems of autonomous exploration, terrain monitoring, and optimized inspection path planning using aerial robots. The autonomous exploration algorithms described employ a receding horizon structure to iteratively derive the action that the robot should take to optimally explore its environment when no prior map is available, with an extension to localization uncertainty-aware planning. Terrain monitoring is tackled by a finite-horizon informative planning algorithm that further respects time budget limitations. For the problem of optimized inspection with a model of the environment known a priori, an offline path planning algorithm is proposed. All proposed methods are characterized by computational efficiency and have been tested thoroughly via multiple experiments. The Robot Operating System (ROS) serves as the common middleware for the outlined family of methods. By the end of this chapter, the reader should be able to use the open-source contributions of the algorithms presented, implement them from scratch, or modify them to further fit the needs of a particular autonomous exploration, terrain monitoring, or structural inspection mission using aerial robots. Four different open-source ROS packages (compatible with ROS Indigo, Jade, and Kinetic) are released, while the repository https://github.com/unr-arl/informative-planning stands as a single point of reference for all of them.
Field and Service Robotics | 2018
Inkyu Sa; Mina Kamel; Raghav Khanna; Marija Popovic; Juan I. Nieto; Roland Siegwart
This paper describes dynamic system identification and full control of a cost-effective multirotor micro aerial vehicle (MAV). The dynamics of the vehicle and autopilot controllers are identified using only a built-in IMU and utilized to design a subsequent model predictive controller (MPC). Experimental results for the control performance are evaluated using a motion capture system while performing hover, step responses, and trajectory following tasks in the presence of external wind disturbances. We achieve root-mean-square (RMS) errors between the reference and actual trajectory of x = 0.021 m, y = 0.016 m, z = 0.029 m, roll = 0.392°, pitch = 0.618°, and yaw = 1.087° while performing hover. Although we utilize accurate state estimation provided by a motion capture system in an indoor environment, the proposed method is one of the non-trivial prerequisites for building any field or service aerial robot. This paper also conveys the insights we have gained about the commercial vehicle and shares them with the community through open-source code and documentation.
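The per-axis figures reported above are standard RMS errors between the reference and the flown trajectory; a minimal sketch with synthetic hover data (the oscillation amplitude and frequency are made up for illustration):

```python
import numpy as np

def rms_error(reference, actual):
    """Root-mean-square error between a reference trajectory and the
    trajectory actually flown, computed per axis."""
    reference = np.asarray(reference, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((actual - reference) ** 2)))

# Toy hover example: commanded x = 0, measured x oscillates slightly.
t = np.linspace(0.0, 10.0, 500)
x_ref = np.zeros_like(t)
x_meas = 0.02 * np.sin(2 * np.pi * 0.5 * t)   # ~2 cm oscillation
err = rms_error(x_ref, x_meas)                # near 0.02 / sqrt(2)
```

The same computation applied to each of x, y, z, roll, pitch, and yaw yields the per-axis numbers quoted in the abstract.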
Field and Service Robotics | 2018
Amedeo Rodi Vetrella; Inkyu Sa; Marija Popovic; Raghav Khanna; Juan I. Nieto; Giancarmine Fasano; Domenico Accardo; Roland Siegwart
In many unmanned aerial vehicle (UAV) applications, flexible trajectory generation algorithms are required to enable high levels of autonomy for critical mission phases, such as take-off, area coverage, and landing. In this paper, we present a guidance approach which uses the improved intrinsic tau guidance theory to create spatio-temporal 4-D trajectories for a desired time-to-contact with a landing platform tracked by a visual sensor. This allows us to perform maneuvers with tunable trajectory profiles, while catering for static or non-static starting and terminating motion states. We validate our method in both simulations and real platform experiments by using rotary-wing UAVs to land on static platforms. Results show that our method achieves smooth landings within 10 cm accuracy, with easily adjustable trajectory parameters.
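Tau theory's constant tau-dot strategy gives a feel for how such time-to-contact trajectories are shaped: keeping the time derivative of tau (the gap divided by its closure rate) at a constant k closes an initial gap in exactly the desired time, with k tuning the deceleration profile. The sketch below is a simplified one-dimensional illustration, not the paper's intrinsic tau guidance formulation:

```python
import numpy as np

def tau_gap_profile(x0, T, k, n=101):
    """Gap-closure profile under the constant tau-dot strategy:
    with d(tau)/dt = k held constant (0 < k < 1), an initial gap x0
    shrinks as x(t) = x0 * (1 - t/T)^(1/k), reaching zero at the
    chosen time-to-contact T. Smaller k gives a softer approach."""
    t = np.linspace(0.0, T, n)
    x = x0 * (1.0 - t / T) ** (1.0 / k)
    return t, x

# Close a 5 m vertical gap to a landing platform in 10 s.
t, x = tau_gap_profile(x0=5.0, T=10.0, k=0.4)
```

Varying `k` is the kind of tunable trajectory-profile knob the abstract refers to.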
Remote Sensing | 2018
Inkyu Sa; Marija Popovic; Raghav Khanna; Philipp Lottes; Frank Liebisch; Juan I. Nieto; Cyrill Stachniss; Achim Walter; Roland Siegwart
We present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color only or color and near-infrared (NIR) channels. Computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and calibrated radiometrically across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3 channel RGB inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics.
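The sliding-window step above amounts to cutting the channel-aligned orthomosaic into network-input-sized tiles with no resampling, keeping offsets so per-tile predictions can be stitched back into one map. The tile size, stride, and 9-channel map dimensions below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def tile_orthomosaic(ortho, tile, stride=None):
    """Cut an aligned orthomosaic (H x W x C) into square tiles matching
    the network input size, avoiding any downsampling. Returns the tiles
    and their top-left (row, col) offsets for stitching predictions back
    into a single vegetation map."""
    stride = stride or tile   # non-overlapping tiles by default
    h, w = ortho.shape[:2]
    tiles, offsets = [], []
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            tiles.append(ortho[r:r + tile, c:c + tile])
            offsets.append((r, c))
    return np.stack(tiles), offsets

# Hypothetical 9-channel multispectral orthomosaic, tiled for the DNN.
ortho = np.zeros((480, 640, 9), dtype=np.float32)
tiles, offsets = tile_orthomosaic(ortho, tile=160)
```

Each tile is fed to the DNN at native resolution, and the offsets place the per-tile label maps back into the full orthomosaic.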
Journal of Field Robotics | 2018
Rik Bähnemann; Michael Pantic; Marija Popovic; Dominik Schindler; Marco Tranzatto; Mina Kamel; Marius Grimm; Jakob Widauer; Roland Siegwart; Juan I. Nieto
This article describes the hardware and software systems of the Micro Aerial Vehicle (MAV) platforms used by the ETH Zurich team in the 2017 Mohamed Bin Zayed International Robotics Challenge (MBZIRC). The aim was to develop robust outdoor platforms with the autonomous capabilities required for the competition, by applying and integrating knowledge from various fields, including computer vision, sensor fusion, optimal control, and probabilistic robotics. This paper presents the major components and structures of the system architectures, and reports on experimental findings for the MAV-based challenges in the competition. Main highlights include securing second place both in the individual search, pick, and place task of Challenge 3 and the Grand Challenge, with autonomous landing executed in less than one minute and a visual servoing success rate of over 90% for object pickups.
International Symposium on Experimental Robotics | 2016
Marija Popovic; Gregory Hitz; Juan I. Nieto; Roland Siegwart; Enric Galceran
arXiv: Robotics | 2017
Inkyu Sa; Mina Kamel; Raghav Khanna; Marija Popovic; Juan I. Nieto; Roland Siegwart