
Publications


Featured research published by Lars Blackmore.


IEEE Transactions on Robotics | 2010

A Probabilistic Particle-Control Approximation of Chance-Constrained Stochastic Predictive Control

Lars Blackmore; Masahiro Ono; Brian C. Williams

Robotic systems need to be able to plan control actions that are robust to the inherent uncertainty in the real world. This uncertainty arises due to uncertain state estimation, disturbances, and modeling errors, as well as stochastic mode transitions such as component failures. Chance-constrained control takes into account uncertainty to ensure that the probability of failure, due to collision with obstacles, for example, is below a given threshold. In this paper, we present a novel method for chance-constrained predictive stochastic control of dynamic systems. The method approximates the distribution of the system state using a finite number of particles. By expressing these particles in terms of the control variables, we are able to approximate the original stochastic control problem as a deterministic one; furthermore, the approximation becomes exact as the number of particles tends to infinity. This method applies to arbitrary noise distributions, and for systems with linear or jump Markov linear dynamics, we show that the approximate problem can be solved using efficient mixed-integer linear-programming techniques. We also introduce an important weighting extension that enables the method to deal with low-probability mode transitions such as failures. We demonstrate in simulation that the new method is able to control an aircraft in turbulence and can control a ground vehicle while being robust to brake failures.
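The core idea of the particle approach above can be sketched in a few lines: sample a finite set of disturbance realizations, propagate each resulting particle under the same control sequence, and estimate the failure probability as the fraction of particles that violate a constraint. This is a minimal illustration with hypothetical scalar dynamics, noise level, and safety threshold; the paper goes further by expressing the particles as explicit functions of the control variables so the violating fraction can be constrained inside a mixed-integer linear program.

```python
import random

def estimate_failure_prob(x0, controls, a, b, limit, num_particles=1000, seed=0):
    """Particle approximation of a chance constraint for a scalar linear
    system x_{k+1} = a*x_k + b*u_k + w_k with Gaussian disturbances.
    'Failure' means the state exceeding `limit` at any step.
    All numerical values here are illustrative, not from the paper."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(num_particles):
        x = x0
        for u in controls:
            x = a * x + b * u + rng.gauss(0.0, 0.5)  # one sampled disturbance
            if x > limit:
                failures += 1
                break
    return failures / num_particles
```

As the number of particles grows, the estimated violation fraction converges to the true failure probability, which is the sense in which the approximation becomes exact in the limit.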


Journal of Guidance, Control, and Dynamics | 2010

Minimum-Landing-Error Powered-Descent Guidance for Mars Landing Using Convex Optimization

Lars Blackmore; Behcet Acikmese; Daniel P. Scharf

To increase the science return of future missions to Mars and to enable sample return missions, the accuracy with which a lander can be delivered to the Martian surface must be improved by orders of magnitude. Prior work developed a convex-optimization-based minimum-fuel powered-descent guidance algorithm. In this paper, this convex-optimization-based approach is extended to handle the case when no feasible trajectory to the target exists. In this case, the objective is to generate the minimum-landing-error trajectory, which is the trajectory that minimizes the distance to the prescribed target while using the available fuel optimally. This problem is inherently a nonconvex optimal control problem due to a nonzero lower bound on the magnitude of the feasible thrust vector. It is first proven that an optimal solution of a convex relaxation of the problem is also optimal for the original nonconvex problem, which is referred to as a lossless convexification of the original nonconvex problem. Then it is shown that the minimum-landing-error trajectory generation problem can be posed as a convex optimization problem and solved to global optimality with known bounds on convergence time. This makes the approach amenable to onboard implementation for real-time applications.
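The nonconvexity the abstract refers to is easy to see in a toy example: a thrust set with a nonzero lower bound on magnitude is an annulus, and the midpoint of two feasible thrust vectors can violate the lower bound. The sketch below, with made-up 2-D thrust vectors and bounds, checks membership in the nonconvex set and in the convex relaxation obtained by introducing a slack magnitude; it is a geometric illustration only, not the paper's full second-order-cone formulation.

```python
import math

def in_original(u, rho1, rho2):
    """Nonconvex thrust constraint: rho1 <= ||u|| <= rho2 with rho1 > 0."""
    norm = math.hypot(*u)
    return rho1 <= norm <= rho2

def in_relaxed(u, gamma, rho1, rho2):
    """Convex relaxation with slack gamma: ||u|| <= gamma, rho1 <= gamma <= rho2.
    This set is convex in (u, gamma), which is what makes the relaxed
    problem solvable by convex optimization."""
    return math.hypot(*u) <= gamma and rho1 <= gamma <= rho2

# Two thrust vectors on opposite sides of the annulus, each feasible:
u1, u2 = (2.0, 0.0), (-2.0, 0.0)
mid = (0.0, 0.0)  # their midpoint violates the lower bound -> nonconvexity
```

The lossless-convexification result cited in the abstract is precisely the proof that an optimal solution of the relaxed problem also satisfies the original annular constraint.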


American Control Conference | 2006

A probabilistic approach to optimal robust path planning with obstacles

Lars Blackmore; Hui Li; Brian C. Williams

Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. Previous approaches that used constrained optimization to solve for finite sequences of optimal control inputs have been highly effective. For robust execution, it is essential to take into account the inherent uncertainty in the problem, which arises due to uncertain localization, modeling errors, and disturbances. Prior work has handled the case of deterministically bounded uncertainty. We present here an alternative approach that uses a probabilistic representation of uncertainty, and plans the future probabilistic distribution of the vehicle state so that the probability of collision with obstacles is below a specified threshold. This approach has two main advantages: first, uncertainty is often modeled more naturally using a probabilistic representation (for example, in the case of uncertain localization); second, by specifying the probability of successful execution, the desired level of conservatism in the plan can be specified in a meaningful manner. The key idea behind the approach is that the probabilistic obstacle avoidance problem can be expressed as a disjunctive linear program using linear chance constraints. The resulting disjunctive linear program has the same complexity as that corresponding to the deterministic path planning problem with no representation of uncertainty. Hence the resulting problem can be solved using existing, efficient techniques, such that planning with uncertainty requires minimal additional computation. Finally, we present an empirical validation of the new method with a number of aircraft obstacle avoidance scenarios.
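A linear chance constraint of the kind used above has a well-known deterministic equivalent for Gaussian uncertainty: requiring P(aᵀx > b) ≤ δ for x ~ N(μ, Σ) is the same as requiring aᵀμ ≤ b − z₁₋δ·sqrt(aᵀΣa), where z₁₋δ is the standard normal quantile. The sketch below implements that tightening with Python's standard library; the function names and the small 2-D example are ours, not the paper's.

```python
import math
from statistics import NormalDist

def tightened_bound(b, a, cov, delta):
    """Right-hand side of the deterministic equivalent of the linear
    chance constraint P(a^T x > b) <= delta for Gaussian x ~ N(mu, cov):
    the constraint holds iff a^T mu <= b - z_{1-delta} * sqrt(a^T cov a)."""
    n = len(a)
    variance = sum(a[i] * sum(cov[i][j] * a[j] for j in range(n))
                   for i in range(n))
    z = NormalDist().inv_cdf(1.0 - delta)
    return b - z * math.sqrt(variance)

def satisfies_chance_constraint(mu, cov, a, b, delta):
    """Check the chance constraint via its deterministic tightening."""
    mean_val = sum(ai * mi for ai, mi in zip(a, mu))
    return mean_val <= tightened_bound(b, a, cov, delta)
```

Because the tightened constraint is still linear in the mean state, obstacle avoidance (a disjunction of such half-plane constraints per obstacle) keeps the disjunctive-linear-program structure the abstract describes.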


AIAA Guidance, Navigation, and Control Conference | 2009

Convex Chance Constrained Predictive Control without Sampling

Lars Blackmore; Masahiro Ono

In this paper we consider finite-horizon predictive control of dynamic systems subject to stochastic uncertainty; such uncertainty arises due to exogenous disturbances, modeling errors, and sensor noise. Stochastic robustness is typically defined using chance constraints, which require that the probability of state constraints being violated is below a prescribed value. Prior work showed that in the case of linear system dynamics, Gaussian noise and convex state constraints, optimal chance-constrained predictive control results in a convex optimization problem. Solving this problem in practice, however, requires the evaluation of multivariate Gaussian densities through sampling, which is time-consuming and inaccurate. We propose a new approach to chance-constrained predictive control that does not require the evaluation of multivariate densities. We use a new bounding approach to ensure that chance constraints are satisfied, while showing empirically that the conservatism introduced is small. This is in contrast to prior bounding approaches that are extremely conservative. Furthermore we show that the resulting optimization is convex, and hence amenable to online control design.
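The classic way to bound a joint chance constraint without evaluating multivariate densities is Boole's inequality (the union bound): the probability that any of m individual constraints is violated is at most the sum of the individual violation probabilities. The toy sketch below, with illustrative risk numbers of our choosing, compares the union bound against the exact joint risk under an independence assumption, showing why the conservatism is small when individual risks are small.

```python
def union_bound(risks):
    """Boole's inequality: P(any violation) <= sum of individual risks.
    Enforcing sum(risks) <= delta therefore guarantees the joint
    chance constraint without any sampling."""
    return min(1.0, sum(risks))

def exact_joint_risk_independent(risks):
    """Exact joint violation probability if violations were independent,
    shown only to quantify the union bound's conservatism."""
    p_all_ok = 1.0
    for r in risks:
        p_all_ok *= (1.0 - r)
    return 1.0 - p_all_ok
```

For five constraints each at 1% risk, the union bound gives 5.0% while the independent-case joint risk is about 4.9%, so the gap is a few hundredths of a percent; the paper's contribution is a bounding scheme that keeps this conservatism small in the correlated, trajectory-wide setting as well.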


IEEE Transactions on Robotics | 2011

Chance-Constrained Optimal Path Planning With Obstacles

Lars Blackmore; Masahiro Ono; Brian C. Williams

Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. For robust execution, we must take into account uncertainty, which arises due to uncertain localization, modeling errors, and disturbances. Prior work handled the case of set-bounded uncertainty. We present here a chance-constrained approach, which uses instead a probabilistic representation of uncertainty. The new approach plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold. Failure occurs when the vehicle collides with an obstacle or leaves an operator-specified region. The key idea behind the approach is to use bounds on the probability of collision to show that, for linear-Gaussian systems, we can approximate the nonconvex chance-constrained optimization problem as a disjunctive convex program. This can be solved to global optimality using branch-and-bound techniques. In order to improve computation time, we introduce a customized solution method that returns almost-optimal solutions along with a hard bound on the level of suboptimality. We present an empirical validation with an aircraft obstacle avoidance example.


Automatica | 2011

Brief paper: Lossless convexification of a class of optimal control problems with non-convex control constraints

Behcet Acikmese; Lars Blackmore

We consider a class of finite time horizon optimal control problems for continuous time linear systems with a convex cost, convex state constraints and non-convex control constraints. We propose a convex relaxation of the non-convex control constraints, and prove that the optimal solution of the relaxed problem is also an optimal solution for the original problem, which is referred to as the lossless convexification of the optimal control problem. The lossless convexification enables the use of interior point methods of convex optimization to obtain globally optimal solutions of the original non-convex optimal control problem. The solution approach is demonstrated on a number of planetary soft landing optimal control problems.
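The relaxation at the heart of this lossless convexification can be stated compactly; the slack variable is named Γ here for illustration, and the exact notation in the paper may differ:

```latex
\text{Nonconvex control constraint:}\quad
  \rho_1 \le \|u(t)\| \le \rho_2, \qquad \rho_1 > 0.
\]
\[
\text{Convex relaxation (slack } \Gamma\text{):}\quad
  \|u(t)\| \le \Gamma(t), \qquad \rho_1 \le \Gamma(t) \le \rho_2.
```

The relaxed constraint set is convex in the pair (u, Γ), and the lossless-convexification result is the proof that an optimal solution of the relaxed problem satisfies ‖u*(t)‖ = Γ*(t) almost everywhere, so it is also feasible, and optimal, for the original nonconvex problem.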


IEEE Transactions on Control Systems and Technology | 2013

Lossless Convexification of Nonconvex Control Bound and Pointing Constraints of the Soft Landing Optimal Control Problem

Behcet Acikmese; John M. Carson; Lars Blackmore

Planetary soft landing is one of the benchmark problems of optimal control theory and is gaining renewed interest due to the increased focus on the exploration of planets in the solar system, such as Mars. The soft landing problem with all relevant constraints can be posed as a finite-horizon optimal control problem with state and control constraints. The real-time generation of fuel-optimal paths to a prescribed location on a planet's surface is a challenging problem due to the constraints on the fuel, the control inputs, and the states. The main difficulty in solving this constrained problem is the existence of nonconvex constraints on the control input, which are due to a nonzero lower bound on the control input magnitude and a nonconvex constraint on its direction. This paper introduces a convexification of the control constraints that is proven to be lossless; i.e., an optimal solution of the soft landing problem can be obtained via solution of the proposed convex relaxation of the problem. The lossless convexification enables the use of interior point methods of convex optimization to obtain optimal solutions of the original nonconvex optimal control problem.


IEEE Transactions on Automatic Control | 2008

Active Estimation for Jump Markov Linear Systems

Lars Blackmore; Senthooran Rajamanoharan; Brian C. Williams

Jump Markov Linear Systems are convenient models for systems that exhibit both continuous dynamics and discrete mode changes. Estimating the hybrid discrete-continuous state of these systems is important for control and fault detection. Existing solutions for hybrid estimation approximate the belief state by maintaining a subset of the possible discrete mode sequences. This approximation can cause the estimator to lose track of the true mode sequence when the effects of discrete mode changes are subtle. In this paper, we present a method for active hybrid estimation, where control inputs can be designed to discriminate between possible mode sequences. By probing the system for the purposes of estimation, such a sequence of control inputs can greatly reduce the probability of losing the true mode sequence compared to a nominal control sequence. Furthermore, by using a constrained finite horizon optimization formulation, we are able to guarantee that a given control task is achieved, while optimally detecting the hybrid state. In order to achieve this, we present three main contributions. First, we develop a method by which a sequence of control inputs is designed in order to discriminate optimally between a finite number of linear dynamic system models. These control inputs minimize a novel, tractable upper bound on the probability of model selection error. Second, we extend this approach to develop an active estimation method for Jump Markov Linear Systems by relating the probability of model selection error to the probability of losing the true mode sequence. Finally, we make this method tractable using a principled pruning technique. Simulation results show that the new method applied to an aircraft fault detection problem significantly decreases the probability of a hybrid estimator losing the true mode sequence.
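The first contribution above, designing inputs that discriminate between candidate linear models, can be illustrated with a deliberately simplified heuristic: among candidate input sequences, choose the one that maximizes the separation between the models' noise-free output predictions. This is only a stand-in for the paper's actual criterion (a tractable upper bound on the probability of model-selection error), and the scalar models and candidate sequences below are hypothetical.

```python
def predict_outputs(a, b, x0, controls):
    """Noise-free output prediction for a scalar model x_{k+1} = a*x_k + b*u_k."""
    x, ys = x0, []
    for u in controls:
        x = a * x + b * u
        ys.append(x)
    return ys

def most_discriminating(candidates, model1, model2, x0=0.0):
    """Pick the candidate input sequence that maximizes the squared
    separation between the two models' predicted output trajectories.
    Larger separation relative to the noise makes it easier for an
    estimator to identify which model generated the data."""
    def separation(controls):
        y1 = predict_outputs(*model1, x0, controls)
        y2 = predict_outputs(*model2, x0, controls)
        return sum((p - q) ** 2 for p, q in zip(y1, y2))
    return max(candidates, key=separation)
```

Note the intuition this captures: a zero input excites neither model, so their outputs coincide and nothing is learned, whereas a persistent input drives the models apart wherever their dynamics differ.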


Journal of Guidance, Control, and Dynamics | 2011

Swarm Keeping Strategies for Spacecraft under J2 and Atmospheric Drag Perturbations

Daniel Morgan; Soon-Jo Chung; Lars Blackmore; Behcet Acikmese; David S. Bayard; Fred Y. Hadaegh

This paper presents several new open-loop guidance methods for spacecraft swarms comprising hundreds to thousands of agents, with each spacecraft having modest capabilities. These methods have three main goals: preventing relative drift of the swarm, preventing collisions within the swarm, and minimizing the fuel used throughout the mission. The development of these methods progresses by eliminating drift using the Hill-Clohessy-Wiltshire equations, removing drift due to nonlinearity, and minimizing the J2 drift. In order to verify these guidance methods, a new dynamic model for the relative motion of spacecraft is developed. These dynamics are exact and include the two main disturbances for spacecraft in Low Earth Orbit (LEO), J2 and atmospheric drag. Using this dynamic model, numerical simulations are provided at each step to show the effectiveness of each method and to see where improvements can be made. The main result is a set of initial conditions for each spacecraft in the swarm which provides hundreds of collision-free orbits in the presence of J2. Finally, a multi-burn strategy is developed in order to provide hundreds of collision-free orbits under the influence of atmospheric drag. This last method works by enforcing the initial conditions multiple times throughout the mission, thereby providing collision-free motion for the duration of the mission.
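The drift-elimination step above rests on a standard property of the Hill-Clohessy-Wiltshire equations: their closed-form solution contains a secular (linearly growing) along-track term, which vanishes when the initial along-track velocity satisfies the energy-matching condition vy0 = -2*n*x0. The sketch below implements the textbook closed-form solution to verify that condition numerically; the orbit parameters are illustrative and none of this reproduces the paper's J2 or drag modeling.

```python
import math

def cw_state(t, n, x0, y0, z0, vx0, vy0, vz0):
    """Closed-form Hill-Clohessy-Wiltshire relative position at time t,
    where n is the mean motion of the circular reference orbit
    (x radial, y along-track, z cross-track)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0
         + (2 / n) * (c - 1) * vx0 + (4 * s - 3 * n * t) / n * vy0)
    z = c * z0 + (s / n) * vz0
    return x, y, z

def drift_free_vy0(n, x0):
    """Initial along-track velocity that cancels the secular drift term."""
    return -2.0 * n * x0
```

With this initial condition the relative motion returns to its starting point after one orbital period; with any other along-track velocity the deputy drifts away, which is the relative drift the paper's guidance methods are designed to prevent.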


American Control Conference | 2006

Optimal manipulator path planning with obstacles using disjunctive programming

Lars Blackmore; Brian C. Williams

In this paper we present a novel, complete algorithm for manipulator path planning with obstacles. Previous approaches have used incomplete methods to make the problem tractable. By posing the problem as a disjunctive program, we are able to use existing constrained optimization methods to generate optimal trajectories. Furthermore, our method plans entirely in the workspace of the manipulator, eliminating the costly process of mapping the obstacles from the workspace into the configuration space.

Collaboration

Lars Blackmore's top co-authors:

Brian C. Williams (Massachusetts Institute of Technology)
Masahiro Ono (California Institute of Technology)
John M. Carson (California Institute of Technology)
Milan Mandic (California Institute of Technology)
Nanaz Fathpour (California Institute of Technology)
Alberto Elfes (Commonwealth Scientific and Industrial Research Organisation)
Claire E. Newman (California Institute of Technology)
Michael T. Wolf (California Institute of Technology)
Yoshiaki Kuwata (California Institute of Technology)