Brett Bethke
Massachusetts Institute of Technology
Publications
Featured research published by Brett Bethke.
IEEE Control Systems Magazine | 2008
Jonathan P. How; Brett Bethke; Adrian Frank; D. Dale; John Vian
To investigate and develop unmanned vehicle systems technologies for autonomous multiagent mission platforms, we are using an indoor multivehicle testbed called real-time indoor autonomous vehicle test environment (RAVEN) to study long-duration multivehicle missions in a controlled environment. Normally, demonstrations of multivehicle coordination and control technologies require that multiple human operators simultaneously manage flight hardware, navigation, control, and vehicle tasking. However, RAVEN simplifies all of these issues to allow researchers to focus, if desired, on the algorithms associated with high-level tasks. Alternatively, RAVEN provides a facility for testing low-level control algorithms on both fixed- and rotary-wing aerial platforms. RAVEN is also being used to analyze and implement techniques for embedding the fleet and vehicle health state (for instance, vehicle failures, refueling, and maintenance) into UAV mission planning. These characteristics facilitate the rapid prototyping of new vehicle configurations and algorithms without requiring a redesign of the vehicle hardware. This article describes the main components and architecture of RAVEN and presents recent flight test results illustrating the applications discussed above.
AIAA Guidance, Navigation, and Control Conference and Exhibit | 2006
Mario Valenti; Brett Bethke; Gaston A. Fiore; Jonathan P. How; Eric Feron
This paper presents flight tests of a unique indoor, multi-vehicle testbed that was developed to study long-duration UAV missions in a controlled environment. This testbed uses real hardware to examine research questions related to single- and multi-vehicle health management, such as vehicle failures, refueling, and maintenance. The primary goal of the project is to embed health management into the full UAV planning system, thereby leading to improved overall mission performance, even when using simple aircraft that are prone to failures. The testbed has both aerial and ground vehicles that operate autonomously in a large test region and can be used to execute many different mission scenarios. The success of this testbed is largely related to our choice of vehicles, sensors, and the system's command and control architecture, which has resulted in a testbed that is very simple to operate. This paper discusses this testbed infrastructure and presents flight test results from some of our most recent single- and multi-vehicle experiments.
IEEE Robotics & Automation Magazine | 2008
Brett Bethke; Mario Valenti; Jonathan P. How
Unmanned aerial vehicles (UAVs) are becoming vital warfare and homeland security platforms because they have the potential to significantly reduce cost and risk to human life while amplifying warfighter and first-responder capabilities. This article builds on the very active area of planning and control for autonomous multiagent systems. This work represents a step toward enabling robust decision making for distributed autonomous UAVs by improving the team's operational reliability and capabilities through better system self-awareness and adaptive mission planning. The health-aware task assignment algorithm developed in this article was demonstrated to be effective in both simulation and flight experiments.
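To give a concrete flavor of health-aware tasking, the following is a minimal, hypothetical Python sketch: it is not the assignment algorithm from the article, and the vehicle names, health scores, and costs are invented. The idea illustrated is simply that a vehicle's degraded health inflates its effective cost for a task before the assignment is chosen.

```python
# Illustrative only: a toy health-weighted assignment, not the article's algorithm.
# Assumes each vehicle reports a scalar health score in (0, 1]; a degraded
# vehicle's nominal task cost is inflated by 1/health before assignment.

from itertools import permutations

def assign_tasks(vehicles, tasks, cost):
    """Exhaustively pick the assignment minimizing health-weighted cost.

    vehicles: list of (name, health) pairs
    tasks:    list of task identifiers
    cost:     dict mapping (vehicle_name, task) -> nominal cost
    """
    best, best_cost = None, float("inf")
    for order in permutations(range(len(vehicles)), len(tasks)):
        total = sum(cost[(vehicles[v][0], tasks[t])] / max(vehicles[v][1], 1e-3)
                    for t, v in enumerate(order))
        if total < best_cost:
            best, best_cost = order, total
    return {tasks[t]: vehicles[v][0] for t, v in enumerate(best)}, best_cost

# Example: uav2 has a degraded battery, so it is steered toward its cheaper task.
vehicles = [("uav1", 0.95), ("uav2", 0.40)]
tasks = ["search_area_A", "track_target_B"]
cost = {("uav1", "search_area_A"): 1.0, ("uav1", "track_target_B"): 2.0,
        ("uav2", "search_area_A"): 2.0, ("uav2", "track_target_B"): 1.0}
print(assign_tasks(vehicles, tasks, cost))
```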
Lecture Notes in Control and Information Sciences | 2007
Brett Bethke; Mario Valenti; Jonathan P. How
Unmanned aerial vehicles (UAVs) are excellent platforms for detecting and tracking objects of interest on or near the ground due to their vantage point and freedom of movement. This chapter presents a cooperative vision-based estimation and tracking system that can be used in such situations. The method is shown to give better results than could be achieved with a single UAV, while being robust to failures. In addition, this method can be used to detect, estimate, and track the location and velocity of objects in three dimensions. This real-time, vision-based estimation and tracking algorithm is computationally efficient and can be naturally distributed among multiple UAVs. This chapter includes the derivation of this algorithm and presents flight results from several real-time estimation and tracking experiments conducted on MIT's Real-time indoor Autonomous Vehicle test ENvironment (RAVEN).
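As a rough illustration of how observations from several vehicles can be fused into a single 3D estimate, the sketch below performs a standard least-squares triangulation from bearing rays. It is an assumed stand-in, not the estimator derived in the chapter; the positions and target are made up.

```python
# Least-squares triangulation: each UAV i supplies its camera position p_i and a
# bearing vector d_i toward the detected object; the point closest (in the
# least-squares sense) to all bearing rays solves a small 3x3 linear system.

import numpy as np

def triangulate(positions, directions):
    """positions: (N, 3) camera positions; directions: (N, 3) bearing vectors."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Two UAVs observing a target near (2, 1, 0) from different vantage points.
target = np.array([2.0, 1.0, 0.0])
positions = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0]])
directions = np.array([target - p for p in positions])
print(triangulate(positions, directions))   # recovers approximately [2, 1, 0]
```

Velocity estimates would typically come from filtering a sequence of such position fixes over time, for example with a Kalman filter.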
American Control Conference | 2007
Mario Valenti; Brett Bethke; Jonathan P. How; Daniela Pucci de Farias; John Vian
Coordinated multi-vehicle autonomous systems can provide incredible functionality, but off-nominal conditions and degraded system components can render this capability ineffective. This paper presents techniques to improve mission-level functional reliability through better system self-awareness and adaptive mission planning. In particular, we extend the traditional definition of health management, which has historically referred to the process of actively monitoring and managing vehicle sub-systems (e.g., avionics) in the event of component failures, to the context of multiple vehicle operations and autonomous multi-agent teams. In this case, health management information about each mission system component is used to improve the mission system's self-awareness and adapt vehicle, guidance, task, and mission plans. This paper presents the theoretical foundations of our approach and recent experimental results on a new UAV testbed.
American Control Conference | 2008
Brett Bethke; Jonathan P. How; John Vian
Unmanned aerial vehicles (UAVs) are well-suited to a wide range of mission scenarios, such as search and rescue, border patrol, and military surveillance. The complex and distributed nature of these missions often requires teams of UAVs to work together. Furthermore, overall mission performance can be strongly influenced by vehicle failures or degradations, so an autonomous mission system must account for the possibility of these anomalies if it is to maximize performance. This paper presents a general health management methodology for designing mission systems that can anticipate the negative effects of various types of anomalies on the future mission state and choose actions that mitigate those effects. The formulation is then specialized to the problem of providing persistent surveillance coverage using a group of UAVs, where uncertain fuel usage dynamics and strong interdependence effects between vehicles must be considered. Finally, the paper presents results showing that the health-aware persistent surveillance planner based on this formulation exhibits excellent performance in both simulated and real flight test experiments.
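The planning problem described in the abstract can be illustrated with a deliberately tiny MDP: one UAV, a base and a surveillance area, a coverage reward, and stochastic fuel burn. All of the states, probabilities, and rewards below are invented for illustration; this is not the paper's formulation.

```python
# Toy health-aware persistent surveillance MDP (hypothetical numbers throughout).
FUEL_MAX = 8
BASE, AREA, CRASHED = "base", "area", "crashed"
GAMMA = 0.95

def actions(state):
    loc, _ = state
    if loc == BASE:
        return ["refuel", "fly_out"]
    if loc == AREA:
        return ["loiter", "return"]
    return ["none"]                                        # crashed: absorbing

def transitions(state, action):
    """List of (probability, next_state, reward) triples."""
    loc, fuel = state
    if loc == CRASHED:
        return [(1.0, state, 0.0)]
    if loc == BASE and action == "refuel":
        return [(1.0, (BASE, FUEL_MAX), 0.0)]
    dest = AREA if action in ("fly_out", "loiter") else BASE
    reward = 1.0 if dest == AREA else 0.0                  # reward for coverage
    outcomes = []
    for prob, burn in ((0.8, 1), (0.2, 2)):                # uncertain fuel usage
        new_fuel = fuel - burn
        if new_fuel <= 0 and dest != BASE:
            outcomes.append((prob, (CRASHED, 0), -50.0))   # fuel exhausted away from base
        else:
            outcomes.append((prob, (dest, max(new_fuel, 0)), reward))
    return outcomes

states = [(CRASHED, 0)] + [(loc, f) for loc in (BASE, AREA) for f in range(FUEL_MAX + 1)]

def q_value(V, s, a):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions(s, a))

V = {s: 0.0 for s in states}
for _ in range(200):                                       # value iteration
    V = {s: max(q_value(V, s, a) for a in actions(s)) for s in states}

policy = {s: max(actions(s), key=lambda a: q_value(V, s, a)) for s in states}
print(policy[(AREA, 6)], policy[(AREA, 2)])                # plenty of fuel vs. low fuel
```

Even in this toy model, the computed policy keeps a healthy vehicle on station but sends a low-fuel vehicle home before the stochastic fuel burn can strand it, which is the qualitative behavior the abstract describes.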
International Conference on Robotics and Automation | 2007
Mario Valenti; Brett Bethke; D. Dale; Adrian Frank; James S. McGrew; Spencer Ahrens; Jonathan P. How; John Vian
This paper and video present the components and flight tests of an indoor, multi-vehicle testbed that was developed to study long-duration UAV missions in a controlled environment. This testbed is designed to use real hardware to examine research questions related to single- and multi-vehicle health management, such as vehicle failures, refueling, and maintenance. The testbed has both aerial and ground vehicles that operate autonomously in a large, indoor flight test area and can be used to execute many different mission scenarios. The success of this testbed is largely related to our choice of vehicles, sensors, and the system's command and control architecture. The video presents flight test results from single- and multi-vehicle experiments over the past year.
AIAA Guidance, Navigation, and Control Conference | 2009
Brett Bethke; Jonathan P. How; John Vian
This paper presents an extended formulation of the persistent surveillance problem first proposed in [1]. The extended formulation incorporates new communication constraints and a stochastic sensor failure model, in addition to modeling stochastic fuel flow dynamics and the basic constraints of providing surveillance coverage using a team of unmanned vehicles. Using a parallel, distributed implementation of an approximate dynamic programming algorithm, an approximate policy for the persistent surveillance problem can be quickly computed. Simulation analysis of this policy indicates that it correctly coordinates the actions of the team of UAVs to simultaneously provide reliable surveillance coverage and communications over the course of the mission, and appropriately retasks UAVs to maintain these services in the event of sensor failures.
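The parallel-backup idea can be sketched as follows. Note the assumptions: this runs plain value iteration on a random stand-in MDP, partitioned across worker processes, rather than the paper's approximate dynamic programming algorithm or its surveillance model.

```python
# Hypothetical sketch: distribute Bellman backups by partitioning the state
# space into contiguous blocks handled by separate worker processes.

from concurrent.futures import ProcessPoolExecutor

import numpy as np

N_STATES, N_ACTIONS, GAMMA, BLOCK = 200, 4, 0.95, 50
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(N_STATES), size=(N_ACTIONS, N_STATES))  # P[a, s, s']
R = rng.uniform(size=(N_STATES, N_ACTIONS))                       # R[s, a]

def backup_block(args):
    """One Bellman backup for a contiguous block of states [lo, hi)."""
    lo, hi, V = args
    q = R[lo:hi] + GAMMA * np.einsum("ast,t->sa", P[:, lo:hi, :], V)
    return lo, q.max(axis=1)

if __name__ == "__main__":
    V = np.zeros(N_STATES)
    blocks = [(lo, min(lo + BLOCK, N_STATES)) for lo in range(0, N_STATES, BLOCK)]
    with ProcessPoolExecutor() as pool:
        for _ in range(100):
            new_V = np.empty_like(V)
            for lo, vals in pool.map(backup_block, [(lo, hi, V) for lo, hi in blocks]):
                new_V[lo:lo + len(vals)] = vals
            V = new_V
    print(V[:5])
```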
AIAA Guidance, Navigation and Control Conference and Exhibit | 2008
Brett Bethke; Luca F. Bertuccelli; Jonathan P. How
Markov decision processes (MDPs) are a natural framework for solving multiagent planning problems since they can model stochastic system dynamics and interdependencies between agents. In these approaches, accurate modeling of the system in question is important, since mismodeling may lead to severely degraded performance (i.e., loss of vehicles). Furthermore, in many problems of interest, it may be difficult or impossible to obtain an accurate model before the system begins operating; rather, the model must be estimated online. Therefore, an adaptation mechanism that can estimate the system model and adjust the system control policy online can improve performance over a static (off-line) approach. This paper presents an MDP formulation of a multi-agent persistent surveillance problem and shows, in simulation, the importance of accurate modeling of the system. An adaptation mechanism, consisting of a Bayesian model estimator and a continuously running MDP solver, is then discussed. Finally, we present hardware flight results from the MIT RAVEN testbed that clearly demonstrate the performance benefits of this adaptive approach in the persistent surveillance problem.
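The adaptation idea can be sketched with a minimal example: maintain a Bayesian posterior over an uncertain transition model (here a Dirichlet posterior on a two-state chain) and periodically replan against its mean as observations arrive. The chain and its probabilities are invented for illustration and are not the model from the paper.

```python
# Online Bayesian model estimation for an unknown transition matrix (toy example).
import numpy as np

rng = np.random.default_rng(1)
N_STATES = 2
true_P = np.array([[0.7, 0.3],
                   [0.2, 0.8]])          # the "real" dynamics, unknown to the planner

alpha = np.ones((N_STATES, N_STATES))    # Dirichlet prior counts

def estimate():
    """Posterior-mean transition matrix."""
    return alpha / alpha.sum(axis=1, keepdims=True)

state = 0
for step in range(500):
    next_state = rng.choice(N_STATES, p=true_P[state])
    alpha[state, next_state] += 1        # online Bayesian update from the observed transition
    state = next_state
    if step % 100 == 99:
        # In the adaptive planner, this is where the MDP would be re-solved
        # against the current model estimate; here we just print the estimate.
        print(f"step {step + 1}:\n{estimate().round(2)}")
```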
American Control Conference | 2009
Brett Bethke; Jonathan P. How
This paper presents an approximate policy iteration algorithm for solving infinite-horizon, discounted Markov decision processes (MDPs) for which a model of the system is available. The algorithm is similar in spirit to Bellman residual minimization methods. However, by using Gaussian process regression with nondegenerate kernel functions as the underlying cost-to-go function approximation architecture, the algorithm is able to explicitly construct cost-to-go solutions for which the Bellman residuals are identically zero at a set of chosen sample states. For this reason, we have named our approach Bellman residual elimination (BRE). Since the Bellman residuals are zero at the sample states, our BRE algorithm can be proven to reduce to exact policy iteration in the limit of sampling the entire state space. Furthermore, the algorithm can automatically optimize the choice of any free kernel parameters and provide error bounds on the resulting cost-to-go solution. Computational results on a classic reinforcement learning problem indicate that the algorithm yields a high-quality policy and cost approximation.
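The core mechanism, zero Bellman residuals at sample states via a kernel expansion, can be illustrated for the policy-evaluation step on a small finite MDP. The random transition matrix and plain Gaussian kernel below are stand-ins for illustration; they are not the Gaussian process machinery or the problems studied in the paper.

```python
# Sketch of Bellman residual elimination for fixed-policy evaluation:
# represent J(x) = sum_i lam_i * k(x, x_i) over chosen sample states and solve a
# linear system so the Bellman residual is exactly zero at those samples.

import numpy as np

rng = np.random.default_rng(2)
N, GAMMA = 20, 0.9
P = rng.dirichlet(np.ones(N), size=N)     # fixed-policy transition matrix (stand-in)
g = rng.uniform(size=N)                   # per-stage cost under the policy
xs = np.arange(N, dtype=float)            # scalar embedding of the states

def kernel(a, b, ell=3.0):
    """Gaussian kernel between scalar state embeddings."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

samples = np.array([0, 4, 8, 12, 16, 19])            # chosen sample states
K = kernel(xs, xs[samples])                          # k(x, x_i), shape (N, n_samples)
# Enforce J(x_s) = g(x_s) + GAMMA * sum_x' P(x'|x_s) J(x') at every sample x_s:
# with J = K @ lam this is the linear system A @ lam = g[samples].
A = K[samples] - GAMMA * P[samples] @ K
lam = np.linalg.solve(A, g[samples])
J_hat = K @ lam

residual = J_hat - (g + GAMMA * P @ J_hat)
print(np.abs(residual[samples]).max())    # ~0: residual eliminated at the samples
print(np.abs(residual).max())             # generally nonzero at unsampled states
```

As the set of sample states grows to cover the whole state space, every residual is forced to zero and the procedure coincides with exact policy evaluation, which is the limiting behavior the abstract describes for the full policy iteration algorithm.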