Daniel Magree
Georgia Institute of Technology
Publications
Featured research published by Daniel Magree.
Journal of Field Robotics | 2013
Girish Chowdhary; Eric N. Johnson; Daniel Magree; Allen D. Wu; Andy Shein
GPS-denied closed-loop autonomous control of unstable Unmanned Aerial Vehicles (UAVs) such as rotorcraft using information from a monocular camera has been an open problem. Most proposed Vision aided Inertial Navigation Systems (V-INSs) have been too computationally intensive or do not have sufficient integrity for closed-loop flight. We provide an affirmative answer to the question of whether V-INSs can be used to sustain prolonged real-world GPS-denied flight by presenting a V-INS that is validated through autonomous flight-tests over prolonged closed-loop dynamic operation in both indoor and outdoor GPS-denied environments with two rotorcraft unmanned aircraft systems (UASs). The architecture efficiently combines visual feature information from a monocular camera with measurements from inertial sensors. Inertial measurements are used to predict frame-to-frame transition of online selected feature locations, and the difference between predicted and observed feature locations is used to bound in real time the inertial measurement unit drift, estimate its bias, and account for initial misalignment errors. A novel algorithm to manage a library of features online is presented that can add or remove features based on a measure of relative confidence in each feature location. The resulting V-INS is sufficiently efficient and reliable to enable real-time implementation on resource-constrained aerial vehicles. The presented algorithms are validated on multiple platforms in real-world conditions: through a 16-min flight test, including an autonomous landing, of a 66 kg rotorcraft UAV operating in an uncontrolled outdoor environment without using GPS and through a Micro-UAV operating in a cluttered, unmapped, and gusty indoor environment.
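As a hedged illustration of the drift-bounding idea in this abstract — a scalar toy, not the paper's actual V-INS — the sketch below shows how a vision-derived position "measurement" in a Kalman update keeps an inertial prediction's bias-driven drift bounded. All gains, variances, and the bias value are invented for illustration.

```python
# Toy 1-D illustration (not the paper's full V-INS): an inertial
# prediction drifts due to an unmodeled accelerometer bias, and a
# vision-derived position measurement bounds that drift through a
# scalar Kalman update. All numeric values are illustrative.

def ekf_vision_correction(x, P, z, R):
    """One scalar Kalman update: x is the predicted position, z a
    vision-derived position measurement with variance R."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # correct prediction toward measurement
    P_new = (1.0 - K) * P    # reduced uncertainty after update
    return x_new, P_new, K

# Inertial-only propagation with a constant unmodeled bias would drift
# without bound; the vision update keeps the error bounded:
true_pos, est, P = 0.0, 0.0, 1.0
bias = 0.05
for _ in range(20):
    est += bias              # biased inertial prediction accumulates error
    P += 0.1                 # uncertainty grows during prediction
    est, P, K = ekf_vision_correction(est, P, true_pos, R=0.5)

print(abs(est - true_pos) < 0.1)  # drift stays bounded
```

Without the update loop, the estimate would accumulate 0.05 of error per step; with it, the error settles well below 0.1.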
international conference on unmanned aircraft systems | 2013
Daniel Magree; John G. Mooney; Eric N. Johnson
An unmanned aerial vehicle requires adequate knowledge of its surroundings in order to operate in close proximity to obstacles. UAVs also have strict payload and power constraints which limit the number and variety of sensors available to gather this information. It is desirable, therefore, to enable a UAV to gather information about potential obstacles or interesting landmarks using common and lightweight sensor systems. This paper presents a method of fast terrain mapping with a monocular camera. Features are extracted from camera images and used to update a sequential extended Kalman filter. The feature locations are parameterized in inverse depth to enable fast depth convergence. Converged features are added to a persistent terrain map which can be used for obstacle avoidance and additional vehicle guidance. Simulation results, results from recorded flight test data, and flight test results are presented to validate the algorithm.
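The inverse-depth parameterization mentioned above can be sketched as follows. This is a generic illustration of the technique, not the paper's exact state layout: a feature is stored as an anchor camera position, a bearing (azimuth/elevation), and an inverse depth rho, which makes low-parallax features nearly linear in rho and speeds EKF depth convergence.

```python
import math

def bearing_vector(azimuth, elevation):
    """Unit direction vector from azimuth/elevation angles (radians)."""
    return (math.cos(elevation) * math.cos(azimuth),
            math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation))

def inverse_depth_to_point(anchor, azimuth, elevation, rho):
    """Recover the Euclidean feature location: anchor + (1/rho) * m."""
    m = bearing_vector(azimuth, elevation)
    return tuple(a + m_i / rho for a, m_i in zip(anchor, m))

# A feature seen from the origin along the x-axis at inverse depth
# 0.1 (i.e. 10 m away):
p = inverse_depth_to_point((0.0, 0.0, 0.0), 0.0, 0.0, 0.1)
print(p)
```

A distant feature (rho near zero) still has a well-behaved, bounded parameterization, which is the practical advantage over storing Euclidean depth directly.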
AIAA Guidance, Navigation, and Control (GNC) Conference | 2013
Daniel Magree; Eric N. Johnson
The use of optical sensors for navigation on aircraft has received much attention recently. Optical sensors provide a wealth of information about the environment and are standard payloads for many unmanned aerial vehicles (UAVs). Simultaneous localization and mapping (SLAM) algorithms using optical sensors have become computationally feasible in real time in the last ten years. However, implementations of visual SLAM navigation systems on aerial vehicles are still new and consequently are often limited to restrictive environments or idealized conditions. One example of a flight condition which can dramatically affect navigation performance is altitude. This paper seeks to examine the performance of monocular extended Kalman filter based SLAM (EKF-SLAM) navigation over a large altitude change. Simulation data is collected which illustrates the behavior of the navigation system over the altitude range. Navigation and control system parameter values are specified which improve vehicle performance across the flight conditions. Additionally, a detailed presentation of the monocular EKF-SLAM navigation system is given. Flight test results are presented on a quadrotor.
AIAA Guidance, Navigation, and Control Conference | 2015
Daniel Magree; Eric N. Johnson
Presented at the AIAA Guidance Navigation and Control Conference, Kissimmee, Florida, January 2015.
advances in computing and communications | 2014
Daniel Magree; Eric N. Johnson
As unmanned aerial vehicles are used in more environments, flexible navigation strategies are required to ensure safe and reliable operation. Operation in the presence of degraded or denied GPS signal is critical in many environments, particularly indoors, in urban canyons, and in hostile areas. Two techniques, laser-based simultaneous localization and mapping (SLAM) and monocular visual SLAM, in conjunction with inertial navigation, have attracted considerable attention in the research community. This paper presents an integrated navigation system combining both visual SLAM and laser SLAM with an EKF-based inertial navigation system. The monocular visual SLAM system has fully correlated vehicle and feature states. The laser SLAM system is based on Monte Carlo scan-to-map matching, and leverages the visual data to reduce ambiguities in the pose matching. The system is validated in full six-degree-of-freedom simulation, and in flight test. A key feature of the work is that the system is validated with a controller in the navigation loop.
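The scan-to-map matching idea above can be sketched on a toy problem. This is a hedged, heavily simplified stand-in for the paper's laser SLAM system: the map is a set of occupied cells on a 1-D grid, the "scan" is a few endpoint offsets, and candidate poses around a drifted prior are scored by how many scan endpoints land on occupied cells (a real system would sample poses randomly in 2-D/3-D; here the candidates are enumerated deterministically).

```python
# Toy 1-D scan-to-map matching: score candidate poses around a drifted
# prior by agreement between scan endpoints and an occupancy map.
# Map, scan, and scoring are illustrative, not the paper's system.

occupied = {3, 7, 12}          # toy map: occupied cell indices
scan = [-2, 2, 7]              # scan endpoints relative to the vehicle pose
prior = 6                      # drifted pose estimate from inertial navigation

def score(pose):
    """Number of scan endpoints that hit occupied map cells at this pose."""
    return sum(1 for s in scan if pose + s in occupied)

# Deterministic stand-in for Monte Carlo pose sampling around the prior:
candidates = range(prior - 3, prior + 4)
best = max(candidates, key=score)
print(best)  # 5: all three endpoints align with the map only there
```

At pose 5 the endpoints land on cells 3, 7, and 12, matching the map exactly, so the matcher corrects the one-cell drift in the prior.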
AIAA Guidance, Navigation, and Control Conference | 2016
Gerald J.J. van Dalen; Daniel Magree; Eric N. Johnson
This paper presents a vision-based technique to determine the absolute location of a UAV using real-time image-to-map alignment. Normalized Cross-Correlation (NCC) is used to compare images taken onboard against photographic maps of the area. The NCC results are used to define a probability distribution within a particle filter framework. When an absolute location solution is available, the particle filter provides absolute position updates to an underlying visual SLAM-based navigation system. This architecture allows the visual SLAM navigation solution to take advantage of absolute position information when it is available but continue to provide a relative solution when it is not. All processing is performed onboard the vehicle on a lightweight and powerful computer. Flight test results are presented for both indoor and outdoor operation. The results show very good position tracking and a bounded position variance.
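A hedged sketch of the NCC-weighted particle idea: the toy `ncc` below correlates 1-D patches (the paper correlates onboard images against a georeferenced map), and each candidate position's particle weight is derived from its NCC score. The patches, positions, and weight shift are all invented for illustration.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches,
    in [-1, 1]; invariant to gain and offset."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Map patches at three candidate positions, and the onboard image patch
# (which matches position 0 up to a gain/offset change):
map_patches = {0: [1, 2, 3, 4], 10: [4, 3, 2, 1], 20: [1, 3, 2, 4]}
image = [2, 4, 6, 8]

# Each particle's weight comes from its NCC score (shifted to be
# non-negative), then the weights are normalized into a distribution:
weights = {p: 1.0 + ncc(image, patch) for p, patch in map_patches.items()}
total = sum(weights.values())
weights = {p: w / total for p, w in weights.items()}
best = max(weights, key=weights.get)
print(best)  # position 0 gets the highest weight
```

Because NCC is invariant to brightness gain and offset, the correct position wins even though the onboard image is scaled relative to the map patch.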
ieee aerospace conference | 2016
Takuma Nakamura; Stephen Haviland; Dmitry Bershadsky; Daniel Magree; Eric N. Johnson
This paper describes the target detection and tracking architecture used by the Georgia Tech Aerial Robotics team for the American Helicopter Society (AHS) Micro Aerial Vehicle (MAV) challenge. The vision system described enables vision-aided navigation with additional abilities such as target detection and tracking, all performed onboard the vehicle's computer. The author suggests a robust target tracking method that does not solely depend on the image obtained from a camera, but also utilizes the other sensor outputs and runs a target location estimator. The machine learning based target identification method uses Haar-like classifiers to extract the target candidate points. The raw measurements are plugged into multiple Extended Kalman Filters (EKFs). A statistical test (Z-test) is used to bound the measurements and solve the correspondence problem. Using multiple EKFs allows us not only to optimally estimate the target location, but also to use the information as one of the criteria to evaluate the tracking performance. The MAV utilizes performance-based criteria that determine whether or not to initiate a maneuver such as hovering over or landing on the target. The performance criteria are closed in the loop, which allows the system to determine at any time whether or not to continue with the maneuver. For the Vision-aided Inertial Navigation System (VINS), a Harris corner algorithm finds the feature points, and we track them using the statistical knowledge. The feature point locations are integrated in a Bierman-Thornton extended Kalman Filter (BTEKF) with Inertial Measurement Unit (IMU) and sonar sensor outputs to generate vehicle states: position, velocity, attitude, and accelerometer and gyroscope biases. A 6-degrees-of-freedom quadrotor flight simulator is developed to test the suggested method. This paper provides the simulation results of the vision-based maneuvers: hovering over the target, and landing on the target.
In addition to the simulation results, flight tests have been conducted to show and validate the system performance. The 500-gram Georgia Tech Quadrotor (GTQ)-Mini was used for the flight tests. All processing is done onboard the vehicle, and it is able to operate without human interaction. Both the simulation and flight test results show the effectiveness of the suggested method. This system and vehicle were used for the AHS 2015 MAV Student Challenge, which required a GPS-denied closed-loop target search. The vehicle successfully found the ground target and landed on the desired location. This paper shares the data obtained from the competition.
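The Z-test gating mentioned in this abstract is a standard innovation check, sketched below in scalar form with invented numbers and threshold: a raw detection is accepted into a filter only if its normalized innovation falls within a z-score bound, so outlier detections are rejected before they corrupt the estimate.

```python
# Scalar Z-test measurement gating (illustrative values and threshold,
# not the paper's tuning): accept a detection only if its innovation,
# normalized by the innovation standard deviation, is within z_max.

def gate(measurement, predicted, innovation_std, z_max=3.0):
    """Return True if the measurement passes the z-score gate."""
    z = (measurement - predicted) / innovation_std
    return abs(z) <= z_max

predicted, sigma = 10.0, 0.5
print(gate(10.4, predicted, sigma))   # True: 0.8 sigma, accepted
print(gate(14.0, predicted, sigma))   # False: 8 sigma outlier, rejected
```

Running one such gate per EKF also gives a natural correspondence rule: a detection is assigned to the track whose gate it passes.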
Archive | 2013
Kirk Y.W. Scheper; Daniel Magree; Tansel Yucelen; Gerardo De La Torre; Eric N. Johnson
Adaptive control systems have long been used to effectively control dynamical systems without excessive reliance on system models. This is due mainly to the fact that adaptive control guarantees stability; the same, however, cannot be said for performance: adaptive control systems may exhibit poor tracking during transient (learning) time. This paper discusses the experimental implementation of a new architecture for model reference adaptive control; specifically, the reference system is augmented with a novel mismatch term representing the high-frequency content of the system tracking error. This mismatch term is an effective tool to remove the high-frequency content of the error signal used in the adaptive element update law. The augmented architecture therefore allows high-gain adaptation without the usual side effect of high-frequency oscillations. The proposed control architecture is validated using the Georgia Tech unmanned aerial vehicle simulation tool (GUST) and also implemented on the Georgia Tech Quadrotor (GTQ). It is shown that the new framework allows the system to quickly suppress the effect of uncertainty without the usual side effects of high-gain adaptation such as high-frequency oscillations.
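The core idea above — keep high-frequency error content out of the adaptive update — can be illustrated with a first-order low-pass filter. This toy is not the paper's architecture (the paper augments the reference model itself); it only shows, with invented signals and filter constant, how filtering separates the low-frequency error the adaptation should act on from the oscillatory content it should ignore.

```python
import math

def low_pass(signal, alpha=0.1):
    """First-order IIR low-pass: y[k] = y[k-1] + alpha * (x[k] - y[k-1])."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# Tracking error = slow offset (what adaptation should cancel) plus a
# high-frequency oscillation (what high-gain adaptation would amplify):
error = [0.5 + 0.4 * math.sin(2.0 * k) for k in range(200)]
filtered = low_pass(error)

# After the transient, the filtered error retains the offset (~0.5)
# while the oscillation is strongly attenuated:
osc_raw = max(error[100:]) - min(error[100:])
osc_filt = max(filtered[100:]) - min(filtered[100:])
print(osc_filt < osc_raw / 3)
```

Driving the update law with `filtered` rather than `error` is what permits a high adaptation gain without exciting high-frequency oscillations.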
Journal of Aerospace Information Systems | 2016
Daniel Magree; Eric N. Johnson
This paper describes a novel visual simultaneous localization and mapping system based on the UD factored extended Kalman filter. A novel method for marginalizing and initializing state variables i...
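The UD factorization underlying a UD-factored EKF can be sketched directly. This is the standard factorization (Bierman/Thornton style), not the paper's full filter: a symmetric positive-definite covariance P is written as P = U D Uᵀ with U unit upper triangular and D diagonal, and the filter propagates U and D instead of P for better numerical conditioning on limited hardware.

```python
# Standard UD factorization of a symmetric positive-definite matrix,
# computed column by column from the last column backward.

def ud_factorize(P):
    """Return (U, D) with P = U * diag(D) * U^T, U unit upper triangular."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n - 1, -1, -1):
        D[j] = P[j][j] - sum(D[k] * U[j][k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            U[i][j] = (P[i][j] - sum(D[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / D[j]
    return U, D

P = [[2.0, 1.0], [1.0, 1.0]]
U, D = ud_factorize(P)
recon = [[sum(U[i][k] * D[k] * U[j][k] for k in range(2))
          for j in range(2)] for i in range(2)]
print(recon)  # reconstructs P
```

Because D stays diagonal and U unit triangular, the factored update never forms P explicitly, avoiding the loss of symmetry and positive definiteness that plagues the conventional covariance update in single precision.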
AIAA Guidance, Navigation, and Control Conference | 2012
Daniel Magree; Tansel Yucelen; Eric N. Johnson
This paper presents an application of a recently developed command governor-based adaptive control framework to a high-fidelity autonomous helicopter model. This framework is based on an adaptive controller, but the proposed command governor adjusts the trajectories of a given command in order to follow an ideal reference system (capturing a desired closed-loop system behavior) both in transient time and steady state without resorting to high-gain learning rates in the adaptation (update) law. The high-fidelity autonomous helicopter is a six-degree-of-freedom rigid-body model, with additional engine, fuel, and rotor dynamics. Non-ideal attributes of physical systems such as model uncertainty, sensor noise, and actuator dynamics are modeled to evaluate the command governor controller in realistic conditions. The proposed command governor adaptive control framework is shown to reduce attitude error with respect to a standard adaptive control scheme during vehicle maneuvers.
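As a loose, hedged analogy to the command-governor idea — not the paper's framework — the toy below modifies the command fed to a disturbed first-order plant by a term proportional to its deviation from an ideal reference model, reducing tracking error without touching any adaptation gain. The plant, reference model, gains, and disturbance are all invented.

```python
# Toy command adjustment toward an ideal reference model: the governed
# command is the nominal command plus a correction proportional to the
# reference-tracking error. All dynamics and gains are illustrative.

def simulate(gov_gain, steps=300, dt=0.01):
    """First-order plant with an unknown constant disturbance, tracking
    a first-order ideal reference model; returns accumulated |error|."""
    x, x_ref, cmd = 0.0, 0.0, 1.0
    disturbance = 0.5          # unmodeled, so the plant misses the reference
    err_sum = 0.0
    for _ in range(steps):
        g = cmd + gov_gain * (x_ref - x)   # governed (adjusted) command
        x_ref += dt * (-x_ref + cmd)       # ideal reference model
        x += dt * (-x + g + disturbance)   # plant with disturbance
        err_sum += abs(x_ref - x) * dt
    return err_sum

# Adjusting the command shrinks the accumulated reference-tracking error
# compared with feeding the raw command:
print(simulate(gov_gain=5.0) < simulate(gov_gain=0.0))
```

The correction acts on the command path rather than the adaptation law, which is the qualitative point the abstract makes about avoiding high-gain learning rates.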