Contingency Model Predictive Control for Linear Time-Varying Systems
IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY
John P. Alsterda and J. Christian Gerdes
Abstract—We present Contingency Model Predictive Control (CMPC), a motion planning and control framework that optimizes performance objectives while simultaneously maintaining a contingency plan – an alternate trajectory that avoids a potential hazard. By preserving the existence of a feasible avoidance trajectory, CMPC anticipates emergency and keeps the controlled system in a safe state that is selectively robust to the identified hazard. We accomplish this by adding an additional prediction horizon in parallel to the typical Model Predictive Control (MPC) horizon. This extra horizon is constrained to guarantee safety from the contingent threat and is coupled to the nominal horizon at its first command. Thus, the two horizons negotiate to compute commands that are both optimized for performance and robust to the contingent event. This article presents a linear formulation for CMPC, illustrates its key features on a toy problem, and then demonstrates its efficacy experimentally on a full-size automated road vehicle that encounters a realistic pop-out obstacle. Contingency MPC approaches potential emergencies with safe, intuitive, and interpretable behavior that balances conservatism with incentive for high performance operation.
Index Terms—contingency planning, robust control, model predictive control, automated vehicles, collision avoidance
I. INTRODUCTION

Uncertainty remains a fundamental problem for automated control systems, especially as the technology expands into difficult applications such as vehicle automation – building the self-driving car. Uncertainty can take many forms, ranging from measurement accuracy to environmental conditions [1]. Each type presents its own features and challenges, and therefore potentially warrants its own solution. This article proposes a control architecture to tackle a class of uncertainties called contingencies.

A contingency is a future event or circumstance that is possible but cannot be predicted with certainty. An actuator may fail, a sensor could malfunction, or an obstacle may emerge. Two factors distinguish contingent events from other uncertainties. First, their risk of occurrence is situational and recognizable by context or sensors. Second, their high salience demands anticipation and the discrete focus of a dedicated safety plan. The control design we propose is tailored to these factors.

While our formulation is generalized to any linear system, the experiment we perform and examples referenced herein
J.P. Alsterda is with the Department of Mechanical Engineering, Stanford University, Stanford, CA, 94305 USA (email: [email protected]). J.C. Gerdes is with the Department of Mechanical Engineering, Stanford University, Stanford, CA, 94305 USA (email: [email protected]). Manuscript received 23 February 2021; revised DD Month YYYY. This work was supported in part by Ford Motor Company, Dearborn, MI, 48126 USA.

Fig. 1. Contingency MPC prediction horizons: Nominal horizon (blue) focuses on performance. Contingency horizon (red) must obey the contingent constraints (red-shaded box). The first input, u_0, is shared between the horizons, optimized to meet both nominal and contingent objectives.

are from the domain of automated vehicles (AVs), an application for which contingencies are frequent and diverse. When released onto public roads, AVs must handle the surprises that human drivers tackle routinely. They should, for example, avoid children who may run into the street and maneuver safely when encountering an icy surface [2].

Contingency planning has been used in industry, government, and military applications, involving the preparation and maintenance of an alternative course of action to meet an unexpected situation [3]. For control engineering, planning an explicitly alternative trajectory distinguishes this strategy from many robust systems, which typically create one plan to satisfy all possible outcomes. Robust approaches can be impractically conservative when uncertainties are large and numerous, as is the case driving on legal roads [4].

Contingency planning can be appropriate when a hypothetical situation is markedly different than nominal conditions, entailing new objectives and constraints.
For instance, an obstacle in an AV's path presents an obvious change to its constraints: avoid collision. But objectives may also differ; nominal goals such as passenger comfort, efficiency, and time-optimality are not relevant in emergency. Rather, highly dynamic maneuvers may be tolerable and required to maintain safety.

Another potential contingency is an abrupt change in the road surface. When driving in the snow, for example, a vehicle must be prepared for ice and loss of traction. These two surfaces present different constraints in the form of physics and dynamics equations, but also justify differing objectives. On the snow we seek tight path-tracking performance, but we can ease or disregard this goal in emergency. Contingency planning allows constraints and objectives to be chosen individually for each circumstance.
As our investigation will demonstrate, planning with only a single trajectory can conflate objectives and produce a controller that maintains smooth avoidance maneuvers from unlikely events, at the expense of nominal performance. Contingency planning, in contrast, can facilitate rapid response to sudden events with less modification to nominal operation.

This article is organized as follows: Section II places our Contingency Model Predictive Control (CMPC) algorithm in the context of existing techniques to mitigate uncertainty, including other contingency planners and Model Predictive Control (MPC) algorithms. Section III presents a general CMPC formulation for linear time-varying systems. In Section IV we simulate CMPC on a toy problem to explore its properties in comparison to a simple Robust MPC. Section V develops a CMPC optimization for an automated road vehicle, and Section VI presents the experimental results of that controller navigating a full-size automated vehicle around a pop-out obstacle, a scenario relevant to leaders in self-driving car development [5]. Finally, Section VII reflects on the current capabilities and limitations of CMPC, and proposes future steps for development.

II. RELATED WORK
An early application of contingency planning in control engineering was high-level routing for robotic vehicles. Linden and Glicksman modified the A* dynamic programming algorithm to choose routes which considered risk of obstruction at bottlenecks [6]. NASA further developed the strategy for a planetary rover Contingency Planner/Scheduler, which planned for the possibility of low battery, instrument failure, or terrain traversal failure [7]. Meuleau and Smith later presented a contingency planner to optimize a rover's daily activities over the belief space, casting the problem as a Partially Observable Markov Decision Process (POMDP) [8].

Hardy et al. and Salvado et al. revisited contingency planning, narrowing the problem from routing to path-planning [9][10]. They proposed and simulated optimizations to navigate probabilistic obstacles, such as an oncoming vehicle that may turn across their intended path. By computing state-trajectories with shared initial segments, their planned movements were prepared for either outcome.

The algorithm we propose advances contingency planning by narrowing the problem further, from path-planning down to closed-loop control by using MPC, an optimal control strategy for systems that require advance planning to achieve high performance and avoid hazards [11]. Whereas a path-planner returns a state trajectory only, MPC computes states and the input commands required to achieve them. MPC has two advantages over path planning: First, it integrates planning and control, eliminating the need for a path-following controller and providing straightforward assurance of dynamic feasibility. Second, it expands the range of contingent events that can be considered to include model-based emergencies (i.e. loss of friction, actuator failure, or other model-mismatch).

We form Contingency MPC by augmenting the typical MPC structure with a second prediction horizon whose task is to return a trajectory that safely navigates an identified contingency. As illustrated in Fig. 1, one horizon pursues nominal performance while the other maintains a feasible emergency avoidance.

Within the field of MPC, there exists a rich and growing collection of techniques to mitigate uncertainty. Most fall under two broad categories: Robust MPC (RMPC) and Stochastic MPC (SMPC).
In two reviews, Mayne and Saltik et al. discuss several strategies, challenges in computational and conceptual complexity, and directions for future development [12][13].

In RMPC, uncertain parameters are confined to a set, and a conservative control trajectory is optimized to satisfy system constraints for all possible combinations of these parameters. Thus RMPC is prepared for the most extreme coincidences of bad luck, however unlikely. Often cast as a min-max problem, RMPC is in general non-convex, leading to difficulty assuring consistent real-time operation [14]. To ensure tractability, Sartipizadeh et al. employed an approximate convex hull, and Hu et al. optimized offline [15][16].

Tube MPC is an RMPC that seeks higher performance by computing closed-loop policies rather than open-loop trajectories [17]. Tubes are often designed offline, but Lopez et al. recently demonstrated online computation [18]. In the abstract, a perfect policy is a complete contingency plan – ready with an optimal action for every possibility. Existing Tube MPC algorithms, however, focus more on mitigating disturbances and modeling error than contingent events.

Stochastic MPC approaches uncertain hazards with less conservatism than RMPC by optimizing over risk directly [13]. In SMPC, uncertain quantities are modeled as statistical distributions, and inputs are calculated to minimize expected cost. Bujarbaruah et al. recently formulated an Adaptive Stochastic MPC to learn model uncertainty online [19].

Scenario SMPC offers a structure that is similar to CMPC, a prediction horizon tree formed by sampling from a distribution at each MPC stage. Krishnamoorthy et al. recently improved Scenario SMPC decomposition [20], and Batkovic et al. combined Tube and Scenario MPC to navigate multi-modal obstacles [21]. As Bloom and Menefee described, scenario plans are similar to but typically broader than contingency plans, encompassing a larger range of possibilities [3].
Indeed, Scenario SMPC can prepare for a wide spectrum of possible futures but is unlikely to sample the specific and more extreme events for which contingency planning is designed.

A core element of any SMPC is a disturbance model, a probability distribution or some function to produce random samples [22]. Some control applications, however, may not be suitable to likelihood modeling. For example, an accurate distribution may not exist for a road's coefficient of friction, or we may not feel comfortable using probability to predict a child's movement. Contingency MPC offers an avoidance strategy that does not depend on a probabilistic model.

This article further develops the CMPC algorithm we first introduced to navigate uncertain friction conditions, including experimental demonstration on an extreme polished-ice surface [23]. Dallas et al. recently expanded that formulation to update friction uncertainty online [24], and Ivanovic et al. extended CMPC to navigate roadways with multi-modal predictions of pedestrians and other agents.
III. CONTINGENCY MPC FORMULATION
Classical MPC is a receding horizon optimal control technique that pursues performance and safety by minimizing a cost function while subject to explicit constraints such as plant dynamics and state or input boundaries [11]. At each time-step, MPC calculates a prediction horizon – a state trajectory with the open-loop input commands required to achieve it. Upon completion of each iteration, the horizon's first command u_0 is deployed, and a new optimization commences. In this article, we consider a fast convex formulation, in which the next optimization will converge before u_0 from the previous time-step expires. Therefore, only u_0 from each time-step should ever be actuated. Future references to deterministic MPC or a nominal prediction horizon also refer to this algorithm.
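The receding-horizon pattern above can be sketched in a few lines. The snippet below is our illustration, not code from the article: it controls a 1-D integrator x[k+1] = x[k] + u[k], where the inner "optimization" (minimize the sum of squared inputs subject to reaching a target in N steps) happens to have the closed-form solution of spreading the remaining distance evenly. The point of the sketch is the loop structure: re-optimize every time-step, but actuate only the first command.

```python
# Illustrative sketch (not from the article): a receding-horizon loop for a
# 1-D integrator. solve_horizon is a stand-in for the MPC optimization; here
# it has a closed form because an even spread minimizes the sum of squares.

def solve_horizon(x, target, N):
    """Return the open-loop input sequence for an N-step horizon."""
    u = (target - x) / N          # even spread minimizes sum(u_k^2)
    return [u] * N

def receding_horizon_control(x0, target, N=10, steps=50):
    x = x0
    for _ in range(steps):
        plan = solve_horizon(x, target, N)  # re-optimize at every time-step
        x = x + plan[0]                     # deploy only u_0; discard the rest
    return x

# The closed loop converges geometrically toward the target.
final = receding_horizon_control(x0=0.0, target=1.0)
```

Because each iteration discards all but the first command, the closed-loop trajectory differs from any single open-loop plan; this is the same property CMPC exploits when the two horizons share only u_0.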
A. General Contingency MPC
CMPC is a development from classical MPC, in which the nominal horizon is augmented by an additional horizon – a contingency plan. Both horizons are optimized together, simultaneously. A contingency plan is an avoidance trajectory which mitigates a contingent event, a potential circumstance identified by perception or some other sub-system.

CMPC's paired prediction horizons are illustrated in Fig. 1. The nominal and contingency trajectories each stem from x_0, the current measured state. x^{n*} is the nominal state trajectory, analogous to a classical MPC horizon. These are the states we intend to drive the system through by applying u^{n*}, the nominal input trajectory. x^{c*} and u^{c*} are the contingency plan, which is subject to unique constraints illustrated by the red shaded keep-out area. These constraints encode the hazard posed by a potential emergency; the nominal horizon does not see them.

Critically, the root inputs u^n_0 and u^c_0 are constrained to be identical; references to u_0 simply refer to these values. The equality condition assures that the u_0 ready for deployment is both optimized for performance and robust to the contingency. When the contingency plan forecasts danger, the two objectives must negotiate to find a u_0 agreeable to both horizons. Nominal path-following and performance will be sacrificed to ensure contingency states remain safe.

With this design, our controller is never required to choose between the nominal and contingency plans – the u_0 returned in each iteration is always viable for both trajectories. As the system drives forward in time, there are two possibilities: Most often, the contingent event will not occur. As its possibility fades the contingency constraints recede and the system resumes normal operation. Occasionally, the contingent event will occur; having anticipated this possibility, the system is poised to execute an avoidance maneuver.

Next, we define the CMPC objective function J as the sum of nominal and contingency costs at each of N stages.
To provide some notational relief, sub- and super-scripts placed outside parentheses or brackets in this article apply to all variables within, such that (x, u)^n_k = (x^n_k, u^n_k).

$$J(x_0) = \sum_{k=0}^{N} j(x,u)_k = \sum_{k=0}^{N} j^n(x,u)^n_k + j^c(x,u)^c_k \tag{1a}$$

Each horizon is subject to its own cost function, j^n or j^c, to keep nominal and contingency objectives independent. This is important because normal operation and emergency maneuvering are fundamentally different. A road vehicle during normal operation, for example, has objectives such as passenger comfort and efficiency that are less relevant during emergency. We can relieve the contingency horizon of these inappropriate costs to focus on preserving a safe avoidance maneuver. By allowing the control engineer to prescribe context-appropriate costs to each horizon, the cost functions can be designed to optimize their intended operational domain.

The constrained optimization problem statement now follows. The root commands for each horizon must be equal. State transitions follow a dynamics model f, which may be different in the nominal and contingency plans (and potentially leads to x^n ≠ x^c). Further constraints g may encode state or actuator limits and encode contingency hazards.

$$\underset{u^n,\,u^c}{\text{minimize}} \;\; J(x_0), \;\; \text{subject to:} \quad u^n_0 = u^c_0 = u_0 \tag{1b}$$
$$x^n_{k+1} = f^n(x,u)^n_k \;\; \forall k \quad \text{and} \quad x^c_{k+1} = f^c(x,u)^c_k \;\; \forall k \tag{1c}$$
$$g^n(x,u)^n \le 0 \;\; \forall g^n \quad \text{and} \quad g^c(x,u)^c \le 0 \;\; \forall g^c \tag{1d}$$

B. Linear Contingency MPC
Generally, CMPC can be adapted onto any MPC. But for the remainder of this article, we focus on a convex form known as a quadratic program (QP), which has a quadratic objective function and linear (affine, precisely) constraints [25]. Convex programming allows certain CMPC features to be more easily proved and demonstrated, and yields speedy computations that can run online for real-time experiments. A convex CMPC formulation follows:

First we refine the definition of states and inputs as stacked vectors x ∈ R^{2·n} and u ∈ R^{2·m}, respectively, for a controlled system with states and inputs of dimension n and m.

$$x_k = \begin{bmatrix} x^n \\ x^c \end{bmatrix}_k; \quad u_k = \begin{bmatrix} u^n \\ u^c \end{bmatrix}_k \tag{2a}$$

Next we develop a convex objective function J_cvx. We desire to minimize the expected cost j at each stage:

$$J_{cvx}(x_0) = \sum_{k=0}^{N} \mathbb{E}_C\!\left[\, j(x,u)_k \,\right] = \sum_{k=0}^{N} P^n \cdot j(x,u)^n_k + P^c \cdot j(x,u)^c_k \tag{2b}$$

The expectation is expanded over the set of outcomes C with their associated likelihoods. In this article we consider only two outcomes: the contingency occurs (with probability P^c ∈ [0, 1)) or it does not (P^n = 1 − P^c). The expansion is now a convex combination on j. It's critical to note that the safety of CMPC does not depend on the accuracy of assigning P^c. CMPC remains robust to the contingent event regardless of P^c's value; a feasible avoidance trajectory will always be maintained. Rather, P^c is a knob that tunes CMPC's focus on nominal performance objectives as it approaches a contingency. Using P^c to separate nominal and contingency costs yields an elegant formulation with intuitive behavior, described further in Section VII.

To complete the objective function, we narrow our choice of j to a weighted 2-norm on x and u. Other convex terms are allowed, such as the 1-norm we add in Section V, but are neglected here for compactness. Q and R are positive semi-definite matrices ∈ R^{n×n} and ∈ R^{m×m} respectively.
$$J_{cvx}(x_0) = \sum_{k=0}^{N} \mathbb{E}_C\!\left[\, x^\top Q\, x + u^\top R\, u \,\right]_k = \sum_{k=0}^{N} P^n \cdot \left( x^\top Q\, x + u^\top R\, u \right)^n_k + P^c \cdot \left( x^\top Q\, x + u^\top R\, u \right)^c_k$$
$$= \sum_{k=0}^{N} x^\top_k \begin{bmatrix} P^n Q & 0 \\ 0 & P^c Q \end{bmatrix} x_k + u^\top_k \begin{bmatrix} P^n R & 0 \\ 0 & P^c R \end{bmatrix} u_k \tag{2c}$$

The resulting cost function is quadratic on the stacked state and input vectors from (2a). With a cost function in hand, the constrained convex optimization problem statement follows:

$$\underset{u}{\text{minimize}} \;\; J_{cvx}(x_0), \;\; \text{subject to:} \quad u^n_0 = u^c_0 = u_0 \tag{2d}$$
$$x_{k+1} = \begin{bmatrix} x^n \\ x^c \end{bmatrix}_{k+1} = \begin{bmatrix} A^n & 0 \\ 0 & A^c \end{bmatrix}_k x_k + \begin{bmatrix} B^n & 0 \\ 0 & B^c \end{bmatrix}_k u_k + \begin{bmatrix} C^n \\ C^c \end{bmatrix}_k = A_k x_k + B_k u_k + C_k \tag{2e}$$
$$\begin{bmatrix} G^n & 0 \\ 0 & G^c \end{bmatrix}_k x_k + \begin{bmatrix} H^n & 0 \\ 0 & H^c \end{bmatrix}_k u_k \le b \tag{2f}$$

As before, u^n_0 and u^c_0 must be equal. The dynamics model f is now limited to be affine, implemented with block-diagonal dynamics and input matrices. Lastly, the inequality constraints are limited to affine functions defined here by matrices G and H and offset vector b.

To highlight the time-varying capacity of this formulation, note that these matrices may change with each time-step k. To demonstrate, we successively re-linearize the nonlinear AV dynamics in Section V as the system moves among operating points.

Fig. 2. Toy problem state-space with the contingent obstacle's path.
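The block-diagonal structure of (2e) can be made concrete with a small numpy sketch (our illustration, not the article's code). The placeholder 1-state, 1-input models below are hypothetical; the point is that the two horizons propagate independently through the stacked matrices, and interact only through the shared first input enforced by constraint (2d).

```python
# Minimal numpy sketch of the stacked affine dynamics in (2e). An, Bn, Cn and
# Ac, Bc, Cc are hypothetical placeholder models, not values from the article.
import numpy as np

An, Bn, Cn = np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
Ac, Bc, Cc = np.array([[1.0]]), np.array([[0.5]]), np.array([[0.1]])

# Block-diagonal stacked system from (2e).
A = np.block([[An, np.zeros((1, 1))], [np.zeros((1, 1)), Ac]])
B = np.block([[Bn, np.zeros((1, 1))], [np.zeros((1, 1)), Bc]])
C = np.vstack([Cn, Cc])

def step(x, u):
    """One stacked state transition: x_{k+1} = A x_k + B u_k + C."""
    return A @ x + B @ u + C

x0 = np.zeros((2, 1))
u0 = np.array([[0.2], [0.2]])   # k = 0: u^n_0 = u^c_0, per constraint (2d)
x1 = step(x0, u0)
u1 = np.array([[0.2], [-0.4]])  # k > 0: the two horizons may diverge
x2 = step(x1, u1)
```

Because the off-diagonal blocks are zero, the nominal and contingency states never mix inside the dynamics; the coupling lives entirely in the equality constraint on the first input.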
IV. TOY-PROBLEM SIMULATION
To illustrate CMPC's properties and behavior, we pose a toy problem with minimum complexity and simulate the controller's approach. In contrast to a more conservative RMPC, CMPC achieves higher performance as measured by the objective function. We also explore the effect of varying P^c, and show how CMPC and RMPC behavior converges as P^c tends toward 100%.

A. System dynamics and control objectives
The toy problem state-space is the two dimensional environment shown in Fig. 2, with x and y axes. The controlled system is a point-mass that initially resides at the origin:

$$x, y \in \mathbb{R}; \quad x_0 = y_0 = 0 \tag{3a}$$

When the simulation commences, it moves in the +x direction at a constant speed of 1 unit/time-step, heading toward the red-shaded contingency region at x = 10. An obstacle currently lies there at the safe height of y = −1, but may spring upward at any time. Fortunately the mass's y position is controllable; its next value can be chosen freely with command input u. This movement is captured by the dynamics model:

$$\begin{bmatrix} x \\ y \end{bmatrix}_{k+1} = \begin{bmatrix} x \\ y \end{bmatrix}_k + \begin{bmatrix} 1 \\ u \end{bmatrix}_k \tag{3b}$$

This state-space equation is separable. (3a) and (3b) yield the following real-valued equations for the system state at any given time-step k:

$$x_k = k \quad \text{and} \quad y_k = \sum_{i=0}^{k-1} u_i \tag{3c}$$

The obstacle has the following properties, all known to the controller: It is a hurdle barrier at x = 10, extending downward to y = −∞. Its height begins at y^{obs}_{k=0} = −1, but may spring upward to a maximum height of y^{obs}_{max} = +1, as indicated in Fig. 2. If and when the obstacle triggers, it begins to move upwards at a constant speed of v^{obs} = 0.2 units/time-step. Looking forward from a current time-step k, to when the point-mass arrives at x = k = 10, the obstacle height will be:

$$y^{obs}_{k=10} = \min\!\left( y^{obs}_k + \Delta k \cdot v^{obs},\; y^{obs}_{max} \right) = \min\!\left( y^{obs}_k + (10 - k) \cdot 0.2,\; 1 \right) \tag{3d}$$
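The forecast (3d) transcribes directly into a few lines of Python. This is our own sketch, using the toy problem's constants as reconstructed above (obstacle starting at −1, rising at 0.2 units/time-step, capped at +1).

```python
# Worst-case obstacle forecast from (3d): looking ahead from time-step k,
# the obstacle height when the point-mass reaches x = 10.
Y_OBS_MAX = 1.0
V_OBS = 0.2

def obstacle_height_at_arrival(k, y_obs_k):
    """Worst-case obstacle height at x = 10, forecast from time-step k."""
    return min(y_obs_k + (10 - k) * V_OBS, Y_OBS_MAX)

# From the start (k = 0, obstacle still at -1), the worst case is the full
# height of +1; as the mass approaches without a pop, the bound recedes.
h0 = obstacle_height_at_arrival(0, -1.0)   # 1.0
h5 = obstacle_height_at_arrival(5, -1.0)   # 0.0
h8 = obstacle_height_at_arrival(8, -1.0)   # -0.6
```

The receding bound (h0 > h5 > h8 when no pop occurs) is exactly the shrinking red region visible in the later time-steps of Fig. 3.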
Fig. 3. RMPC toy problem solution at 4 time-steps. Top row: current state (black), horizon states (blue), obstacle's potential footprint (light red). Bottom row: horizon input commands (blue). This 'worst' case algorithm begins deviating immediately, even though the obstacle never pops out.

Fig. 4. CMPC toy problem for P^c = 0%. Top row: nominal horizon states (blue) and contingency horizon states (red). Bottom row: nominal horizon inputs (blue) and contingency horizon inputs (red). A contingency plan was maintained, but deviation from y = 0 was not required because the obstacle did not pop.

The constraint imposed upon the point-mass's y position is then:

$$y_{k=10} \ge y^{obs}_{k=10} \tag{3e}$$

The control objective for the point-mass is to safely navigate across the x-axis using the minimum control effort. We encode this with the following cost function:

$$J = \min_u \sum_k u_k^2 \tag{3f}$$

B. Robust MPC Solution
To highlight the behavior of CMPC we compare it to a baseline RMPC, which assumes the worst case evolution of the scene and has a single trajectory like a classical deterministic MPC. Its solution follows. We cast RMPC with a prediction horizon of N = 10 state tuples (x, y) and inputs u. Its cost function takes the form:

$$J = \min_u \sum_{k=0}^{N} u_k^2 = \min_u u^\top u \tag{4a}$$

It is constrained by the dynamics in (3b) and obstacle in (3e), and was solved exactly into the Explicit MPC [26] solution presented in the Appendix:

$$u_0 = y^{obs}_{max} / N \tag{4b}$$

Fig. 3 shows the RMPC solution at 4 time-steps as it approaches the hurdle, which in this instance never triggers. Assuming the worst case, RMPC begins evasive action immediately when the obstacle comes into view of its horizon at k = 1. As the mass approaches in time-steps k = 5 and k = 8, the red-shaded contingency region recedes, indicating reduction of y^{obs}_{k=10} per equation (3d). In each solution commands are evenly spread out, amortizing their cost over the horizon.

C. Contingency MPC Solution
To formulate a CMPC controller, we first concatenate the states and inputs:

$$y_k = \begin{bmatrix} y^n \\ y^c \end{bmatrix}_k; \quad u_k = \begin{bmatrix} u^n \\ u^c \end{bmatrix}_k \tag{5a}$$

The horizontal state x evolves identically regardless of u, and is not required to duplicate. The cost function becomes:

$$J = \min_u \sum_{k=0}^{N} P^n \cdot (u^n_k)^2 + P^c \cdot (u^c_k)^2 = \min_u \sum_{k=0}^{N} u^\top_k \begin{bmatrix} P^n & 0 \\ 0 & P^c \end{bmatrix} u_k \tag{5b}$$

This quadratic form matches the cost function introduced in equation (8a), with Q = 0 and R = 1. The constrained optimization is then:
Fig. 5. CMPC toy problem solution for P^c = 25%. The obstacle (red block) begins to pop up at x = 4. For k ≥ 5, both horizons see the moving obstacle.

$$\min_u \; J(x_0, y_0), \;\; \text{subject to:} \quad u^n_0 = u^c_0 = u_0 \tag{5c}$$
$$\begin{bmatrix} x \\ y^n \\ y^c \end{bmatrix}_{k+1} = \begin{bmatrix} x \\ y^n \\ y^c \end{bmatrix}_k + \begin{bmatrix} 1 \\ u^n \\ u^c \end{bmatrix}_k \tag{5d}$$
$$y^c_{k=10} \ge y^{obs}_{k=10} \tag{5e}$$

The CMPC problem also lends to an explicit solution, found in the Appendix with an analytical comparison to RMPC:

$$u_0 = y^{obs}_{max} \, \frac{P^c}{P^c + N - 1} \tag{5f}$$

First, the no-pop scenario is illustrated in Fig. 4, with P^c = 0% to draw the greatest contrast to RMPC. At k = 1, the obstacle's path is now visible to the contingency horizon, which charts a path around it. The nominal horizon, however, does not see the potential intrusion and remains at y = 0. Because P^c = 0%, the contingent commands incur no penalty and CMPC postpones the avoidance to u_{k>0}. J = u_0 = 0 for every time-step; the point-mass need not deviate from y = 0 for an obstacle that does not pop up.

Next, we sweep P^c from 0% to 100% for the no-pop scenario. Fig. 6 shows the closed loop CMPC states, inputs, and costs incurred. At 100%, the behavior is identical to Robust MPC, as expected from equations (4b) and (5f). CMPC approaches more aggressively as P^c is reduced.

Now the obstacle is allowed to trigger, popping at k = 4 in Fig. 5 with P^c set to 25%. For k = 1 : 4, CMPC deviates from y = 0 in anticipation of the potential obstacle. At k = 5, the obstacle has been moving for one full time-step. The point-mass observes this movement and recognizes the contingency has occurred. The control engineer can design CMPC to respond to activated contingencies in several ways, which we enumerate in Section VII. In this example we chose to alert the nominal horizon to the obstacle's movement; therefore both horizons agree for the remainder of the avoidance and the point-mass safely escapes at x = 10.

Fig. 7 provides a lens into CMPC's objective function.
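As an aside, the explicit solution (5f) can be sanity-checked with a short script. The reduction below is our own, not the article's Appendix derivation: before a pop, the contingency horizon must climb to y^{obs}_{max} by k = 10, the unconstrained nominal inputs beyond u_0 optimize to zero, and the contingency's remaining climb spreads evenly over the other N − 1 steps, leaving a scalar cost in the shared first command u_0 (using P^n + P^c = 1).

```python
# Sanity check of (5f): reduced scalar cost in the shared first command,
#   cost(u0) = u0**2 + Pc * (y_max - u0)**2 / (N - 1),
# compared against the closed form u0 = y_max * Pc / (Pc + N - 1).
N = 10
y_max = 1.0

def reduced_cost(u0, Pc):
    return u0 ** 2 + Pc * (y_max - u0) ** 2 / (N - 1)

def u0_explicit(Pc):
    """Closed-form first command from (5f)."""
    return y_max * Pc / (Pc + N - 1)

def u0_grid(Pc, steps=20001):
    """Brute-force minimizer of the reduced cost over a fine grid."""
    grid = [y_max * i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda u0: reduced_cost(u0, Pc))

# At Pc = 1 the formula recovers the RMPC command y_max / N from (4b);
# at Pc = 0 no deviation is commanded at all, matching Fig. 4.
```

The two endpoints reproduce the limits discussed in the text: P^c = 100% collapses CMPC onto RMPC, and P^c = 0% defers all avoidance effort to the contingency horizon's later commands.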
The costs in each iteration are shown as a bar-graph, and the cumulative cost incurred by executing u_0 is plotted as a line (penalties on u^*_{k>0} are not actually incurred by the system). For x = 0 : 4, avoidance costs are absorbed mostly by the contingency horizon and are minimally present in u_0. After x = 4, the obstacle has been triggered and significant costs are incurred to avoid it. If P^c had been set higher, cumulative u_0 cost would be reduced.

To conclude this toy problem study, we simulated the encounter a large number of times for the entire range of P^c values, triggering the obstacle randomly with a fixed probability per time-step. Fig. 8 shows the results, indicating that an intermediate value of P^c minimized the expected cost among all possible scenarios (pop-up commences at k = 1, 2, ..., or not at all). When P^c is set too low, CMPC approaches too aggressively and often requires an abrupt avoidance maneuver. When P^c is too high, CMPC acts too conservatively and avoids the obstacle more than necessary. At that intermediate value, CMPC deviates a little

Fig. 6. Contingency MPC solutions for a range of P^c values.
Fig. 7. Optimal costs for the Fig. 5 simulation. Blue & red show CMPC plan costs. The line shows cumulative cost incurred by executing u_0 commands.

bit in each approach, regardless of whether the obstacle triggered. When a pop-out does occur, CMPC is well positioned to escape without incurring too much u_0 penalty.

It's interesting to note that the optimal value for P^c is not easily derivable, even for this toy problem. This suggests that in general, assigning an optimal P^c may not be trivial for real systems. However, we reiterate that CMPC maintains a safe avoidance maneuver regardless of the P^c chosen; P^c only tunes the performance of the system. In fact, CMPC with P^c = 0% outperforms RMPC in the toy problem until the pop-up probability exceeds 84%.

This toy example serves to illustrate the basic operation of the CMPC framework and demonstrates its performance advantage over a simple RMPC. Next, we demonstrate its effectiveness on an experimental platform.

V. LINEAR CONTINGENCY MPC FOR AVS

To implement CMPC on a full-size automated road vehicle, we adapt the linear formulation from Section III to the objectives and constraints specific to an AV system. The controller developed here is similar to that which we presented in [23], in which Contingency MPC safely navigated an icy corner. The linearized dynamics model, its discretization, and optimization
constraints were developed by Brown et al., and modified to solve directly for steering angle by Zhang et al. [27][28].

Fig. 8. Expected cost incurred as a function of P^c in the toy problem.

Fig. 9. Single track planar bicycle model diagram on a curvilinear path.
A. Vehicle Dynamics Model
To handle contingency scenarios which push the limits of vehicle handling, dynamics must be modeled with an appropriate degree of fidelity. We accomplish this using a bicycle model with successively linearized tire forces [29]. Fig. 9 illustrates the model with three position and velocity states, and tire forces at each axle. The position states are local to a path with curvature κ. s represents the vehicle's longitudinal progress, e its lateral error from the path, and Δψ is the heading angle error. Velocity states are composed of longitudinal speed U_x, lateral speed U_y, and yaw rate r. The following differential equations govern the states' evolution:

$$\dot{s} = U_x - U_y \Delta\psi \tag{6a}$$
$$\dot{e} = U_y + U_x \Delta\psi \tag{6b}$$
$$\dot{\Delta\psi} = r - \kappa U_x \tag{6c}$$
$$\dot{U}_x = \frac{F_{xf} + F_{xr}}{m} + r U_y \tag{6d}$$
$$\dot{U}_y = \frac{F_{yf} + F_{yr}}{m} - r U_x \tag{6e}$$
$$\dot{r} = \frac{a F_{yf} - b F_{yr}}{I_z} \tag{6f}$$

To linearize these equations for each iteration, the longitudinal trajectory is first computed upstream of CMPC by a simple feedforward-feedback controller, as developed by Funke et al. [30]. Therefore F_{xf}, F_{xr}, U_x, and s are known constants to CMPC. With F_x commands set, the linear CMPC formulated in this article may use only the steering angle δ to solve each optimization. Including longitudinal forces into CMPC is an opportunity for future development. The MPC state vector is then x = [U_y  r  Δψ  e]^T, and the command input u = δ.

F_{yf} and F_{yr} are modeled by a nonlinear Fiala brush tire model that relates lateral tire forces to lumped slip angles α_f and α_r, which are geometrically computed as follows [31]:

$$\delta + \alpha_f = \tan^{-1}\!\left( \frac{U_y + a r}{U_x} \right) \tag{7a}$$
$$\alpha_r = \tan^{-1}\!\left( \frac{U_y - b r}{U_x} \right) \tag{7b}$$
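Equations (6a)-(6f) and (7a)-(7b) transcribe directly into code. The sketch below is our illustration; the mass, yaw inertia, and axle distances are hypothetical placeholders, not the article's vehicle parameters.

```python
# Direct transcription of the planar bicycle model (6a)-(6f) and the
# slip-angle geometry (7a)-(7b). Parameter values are placeholders.
from math import atan

m, Iz = 2000.0, 3000.0   # mass [kg], yaw inertia [kg m^2] (hypothetical)
a, b = 1.3, 1.5          # CG-to-front/rear-axle distances [m] (hypothetical)

def state_derivatives(s, e, dpsi, Ux, Uy, r, Fxf, Fxr, Fyf, Fyr, kappa):
    """Continuous-time derivatives from equations (6a)-(6f)."""
    return {
        "s_dot":    Ux - Uy * dpsi,                 # (6a)
        "e_dot":    Uy + Ux * dpsi,                 # (6b)
        "dpsi_dot": r - kappa * Ux,                 # (6c)
        "Ux_dot":   (Fxf + Fxr) / m + r * Uy,       # (6d)
        "Uy_dot":   (Fyf + Fyr) / m - r * Ux,       # (6e)
        "r_dot":    (a * Fyf - b * Fyr) / Iz,       # (6f)
    }

def slip_angles(Uy, r, Ux, delta):
    """Lumped slip angles from (7a)-(7b); (7a) is rearranged for alpha_f."""
    alpha_f = atan((Uy + a * r) / Ux) - delta       # (7a)
    alpha_r = atan((Uy - b * r) / Ux)               # (7b)
    return alpha_f, alpha_r

# Driving straight (Uy = r = dpsi = 0) on a straight path (kappa = 0):
d = state_derivatives(0, 0, 0, Ux=12.0, Uy=0, r=0,
                      Fxf=0, Fxr=0, Fyf=0, Fyr=0, kappa=0)
```

In CMPC these nonlinear relations are not used directly; as described above, they are successively linearized about the previous solution before each iteration, yielding the affine, time-varying matrices of (2e).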
Fig. 10. Fiala brush tire model. Linearizations shown for two possible operating points, illustrating how nominal and contingency models may differ.
The Fiala model is illustrated in Fig. 10, accompanied by linearizations about two operating points. The curve is successively re-linearized before each CMPC iteration using the previous solution's states and inputs as operating points. Each stage among each horizon uses a unique linearization.

The linearized dynamics are then discretized with respect to time into twenty steps: five short 20 ms time-steps followed by fifteen longer 250 ms time-steps. The commanded steering angle δ is held constant with a zero-order hold during the short time-steps, and interpolated with a first order hold for the remainder of the horizon [27]. We chose the number and length of these time-steps to capture high frequency vehicle dynamics in the near-term, and to extend the total horizon far enough to plan appropriately. The CMPC control loop executes in less than one 20 ms time-step to ensure the next command is ready before the first expires. The resulting dynamics fit the affine form from equation (2e), with block diagonal matrices to support both nominal and contingency prediction horizons.

B. Linear CMPC Problem Statement
The following CMPC optimization is extended from the deterministic MPC presented in [27] to calculate a smooth trajectory which follows a desired path while adhering to dynamics and environmental constraints:

\min_{u} \; \sum_{k=0}^{N} \; x_k^T \begin{bmatrix} P^n Q & 0 \\ 0 & P^c Q \end{bmatrix} x_k + v_k^T \begin{bmatrix} P^n R & 0 \\ 0 & P^c R \end{bmatrix} v_k + W \sigma_k \quad (8a)

v_k = \begin{bmatrix} u^n_k - u^n_{k-1} \\ u^c_k - u^c_{k-1} \end{bmatrix}; \quad W = 1000 \quad (8b)

State weighting matrix $Q$ penalizes heading error $\Delta\psi$ and lateral error $e$. Input weight $R$ penalizes the slew rate of the steering angle, $\delta_k - \delta_{k-1}$. $W$ heavily discourages violation of the environmental constraints in (8d) by penalizing growth in the slack variable $\sigma$. Weight values were tuned for experimental performance, and to prioritize obstacle avoidance, then path tracking, and then smooth operation. This minimization is subject to the linearized dynamics and to the following inequality constraints. First, steering angle and slew rate limits are enforced:

|u_k| \le \begin{bmatrix} \delta_{max} \\ \delta_{max} \end{bmatrix}; \quad |v_k| \le \begin{bmatrix} v_{max} \\ v_{max} \end{bmatrix} \quad \forall k \quad (8c)

Next, environmental and obstacle boundaries are enforced with slack, such that $e_{min} \le e \le e_{max}$:

e^n_{min,k} - \sigma_k \le e^n_k \le e^n_{max,k} + \sigma_k
e^c_{min,k} - \sigma_k \le e^c_k \le e^c_{max,k} + \sigma_k \quad \forall k \quad (8d)

Finally, the trajectories are coupled via their first commands:

u^n_0 = u^c_0 \quad (8e)

VI. EXPERIMENTAL RESULTS
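The essential structure of (8a)-(8e) — two horizons sharing only their first command, with an obstacle constraint placed only on the contingency horizon — can be sketched with a stripped-down, single-state numeric example. This sketch omits the slack variables, actuator limits, and vehicle dynamics; N, P_c, and y_obs are illustrative values, not the article's:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the coupled CMPC structure: nominal and contingency
# copies of y_{k+1} = y_k + u_k share their first input (8e), and only
# the contingency copy must clear an obstacle at the horizon's end.
N, P_c, y_obs = 3, 0.25, 1.0  # illustrative values
P_n = 1.0 - P_c

def cost(U):
    """Probability-weighted effort, analogous to (8a) without states."""
    u_n, u_c = U[:N], U[N:]  # stacked nominal / contingency inputs
    return P_n * (u_n @ u_n) + P_c * (u_c @ u_c)

cons = [
    # (8e): first commands are coupled
    {"type": "eq", "fun": lambda U: U[0] - U[N]},
    # contingency trajectory must clear the obstacle
    {"type": "ineq", "fun": lambda U: np.sum(U[N:]) - y_obs},
]
res = minimize(cost, np.zeros(2 * N), method="SLSQP", constraints=cons)
u0 = res.x[0]  # closed form (Appendix B): y_obs * P_c / (P_c + N - 1)
```

The solver reproduces the behavior described in the article: nominal inputs after the first step go to zero, while the shared first command deviates just enough to keep the contingency plan feasible.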
Using the formulation defined in Section V, we validated CMPC experimentally in a realistic emergency scenario for an automated road vehicle. CMPC safely controlled X1, the 2-ton research vehicle shown in Fig. 12, down a narrow lane of the two-way street illustrated in Fig. 11, with parallel-parked cars lining X1's right-hand side. One parked car, illustrated in red near x = 24 m, pulled into its parking spot very recently. Its occupants are therefore likely to exit the vehicle, and may open their car door into X1's intended path. The potential for X1 to collide with the car door is thus a contingent emergency.

X1 is an electrically powered AV research platform equipped with steer-, drive-, and brake-by-wire systems to enable automated experiments. X1 measures its pose, velocity, and acceleration with a Novatel dGPS/INS with RTK. Low-level control is computed on a real-time dSpace MicroAutoBox executing at 500 Hz. The CMPC optimization was computed on an i7 x86 CPU using CVXGEN in a Linux C++ environment at 50 Hz [32]. No visual perception sensors were used in this experiment; the road geometry and obstacle description were defined in GPS coordinates and made known to CMPC a priori. To incorporate CMPC into a hypothetical commercial system, it will be necessary to perceive and identify contingent hazards, a task beyond the scope of this article.

Fig. 11 illustrates two snapshots in time as X1 approaches the recently parked red vehicle at a speed of 12 m/s, about 27 mph. Both snapshots are taken before the car door opens; its potential footprint is illustrated in red, extending into X1's lane. The car door is forecast to have a maximum width of 1 m and an opening speed of 2 m/s, both known to CMPC and encoded into the contingency horizon's $e^c_{min}$. In this test, the CMPC likelihood parameter was set to $P^c = 25\%$, indicating the contingency has a moderate likelihood.
Fig. 11. CMPC safely navigates around a car door opening into its lane. The contingency horizon (transparent red) maintains the feasibility of an avoidance maneuver, while the nominal horizon (transparent blue) focuses on the desired path. Closed-loop position (x, y) and steering angle δ are plotted in green. The parked car's door begins to open just after 2.7 sec and the avoidance maneuver is deployed.

In most real-life occurrences of this scenario, the parked car's occupants will act reasonably, exiting their vehicle after X1 has passed. However, it is possible the door could open just as X1 arrives and create an emergency which requires anticipatory behavior to avoid. A naïve controller planning to react if and when the door opens may not be able to escape collision. CMPC acts robustly to ensure the feasibility of escape, but balances conservatism with an incentive to follow the nominal path.

At top-left, the position and heading states of the CMPC prediction horizons are plotted at t = 0 sec. The contingency horizon suggests a smooth avoidance around the door's potential footprint. The nominal states do not see the potential hazard and are free to pass through. To avoid clutter, only seven states are drawn from each horizon. The corresponding steering commands are plotted in the lower-left. These command trajectories share an identical steering angle at $u_0$, where for the time being they agree to adhere to the lane's center-line.

At top-right, X1 has closed the distance to the potential obstacle considerably, but the car door remains closed for now. X1's closed-loop positions and steering angles (green) leading up to this point illustrate an important behavior: to reduce the likelihood of requiring drastic emergency behavior,

Fig. 12. X1 is a modular experimental research platform with automated driving capabilities at Stanford University.
CMPC has guided X1 off the lane's center-line and away from the potential hazard by ∼ cm. The magnitude of this conservative deviation is controlled by $P^c$.

The difference between the blue and red horizons at this point illustrates significant tension built up in CMPC's cost function. The nominal horizon wants to return to the path, but the contingency horizon ensures a moderately smooth avoidance is available. X1's trajectory leading up to this snapshot shows the history of negotiation between these objectives.

In the very next time-step after 2.7 sec, the car door begins to swing open. At this moment the contingency is no longer a possibility, but reality. CMPC's understanding of the situation should evolve when a contingency occurs, and there are several design choices available to encode this change in circumstance. In this experiment, CMPC is made aware of the opening door by including the door's footprint in the nominal horizon's constraints (previously, it was only visible to the contingency horizon). With both horizons now aware of the emergency, the tension between plans is resolved and X1 steers purposefully around the obstacle.

Fig. 13. Closed-loop trajectories for CMPC navigating the car door scenario from Fig. 11 with various P_c settings. Plotted from the top: zoomed position, lateral acceleration, steering angle, and steering rate.

Next, Fig. 13 shows the closed-loop behavior of CMPC navigating the car door emergency with a range of $P^c$ values. In each experiment, the parked car's door triggered to open at the same time. The $P^c$ sweep illustrates how this parameter affects CMPC's approach and reaction to emergency. At $P^c = 0\%$, CMPC does not deviate from the desired path until the car door starts to open. This reaction requires the most lateral acceleration and steering angle, and saturates the model's slew rate limit.
As $P^c$ increases, CMPC takes an increasingly conservative approach, eventually performing a worst-case Robust MPC avoidance when $P^c = 100\%$.

VII. DISCUSSION
The toy simulation and car-door experiments in this article demonstrate that Contingency MPC can navigate uncertain environments with known potential hazards. The type of hazard presented here was a pop-out obstacle, but CMPC can also manage model mismatch in the form of friction uncertainty [23].

Mathematically, the hazards we investigated represent two broad classes of contingencies that can be encoded into CMPC. First, the inequality constraints can inscribe obstacles like the car door. Other hazards encoded this way include command limits such as a vehicle's maximum steering angle or braking force. Second, the equality constraints can be modified to encode model mismatch. Uncertainty over any parameter in the system dynamics can constitute a contingency.

CMPC offers a flexible structure for robust control, with several design choices left to the control engineer. Selecting a value for $P^c$ has important implications for the system's performance, as summarized in Table I. The upper-left and lower-right boxes describe the intended CMPC design.

TABLE I

            | Contingent event is common          | Contingent event is uncommon
Large P^c   | ✓ More anticipatory deviation;      | ✗ More anticipatory deviation,
            |   less dynamic avoidances           |   often unnecessary
Small P^c   | ✗ Less anticipatory deviation;      | ✓ Less anticipatory deviation;
            |   more frequent dynamic avoidances  |   less frequent dynamic avoidances
If a contingent event is common, its horizon should bear a greater cost burden, imparted by a larger $P^c$. This ensures smooth avoidances are maintained for these frequent events. Conversely, uncommon contingencies should have a smaller cost burden, directing CMPC to pursue higher nominal performance. When the contingency does occur, a more dynamic avoidance may be used. It is critical to note that CMPC remains robust to the contingency regardless of the value assigned to $P^c$.

The effect of this design is clear in the context of the car door experiment. If car door pop-outs rarely occur, an avoidance will seldom be deployed. Therefore CMPC should focus on nominal operation and allow the contingency plan freedom to use a higher slew rate. Alternatively, if pop-outs are more common, deployment of the contingency plan will be more frequent. In that case the planned avoidances should use less slew, but maintaining them will require more compromise to nominal objectives (e.g. path following or speed tracking).

Another design choice considers how to react when contingencies occur. In this article, pop-out detection was communicated to the nominal horizon; hence both plans perform the avoidance. Alternatively, $P^c$ could be updated online, or CMPC could be collapsed to a single-horizon MPC when an emergency occurs. The choice among these options depends on the control application and emergency circumstance.

An opportunity for development is to include throttle and brakes in CMPC. In this article the convex optimization could only use steering to mitigate the hazard. In practice, however, the best solution to reduce risk may be to simply slow down. Longitudinal commands could be incorporated by encoding the full nonlinear vehicle dynamics into CMPC.

One caveat for obstacle avoidance should be stated: obstacles that can appear or pop out instantaneously may not be appropriate for CMPC.
In the case of the car door, CMPC leverages knowledge of the obstacle's maximum pop-out speed to calculate how closely it can pass by an unopened door. If the door can open arbitrarily fast, there is no safe buffer distance except to clear the obstacle's entire potential footprint. CMPC is not necessarily suitable for every potential hazard.

Finally, Bloom and Menefee remind us that not all contingencies are negative [3]. A positive contingent event may offer a slim passing opportunity to a race car, but only if the controller is poised with a plan to take advantage.

VIII. CONCLUSION
In this article, Contingency Model Predictive Control is established as a credible strategy to augment a deterministic linear MPC controller with robustness. In systems where potential emergencies can be identified, CMPC maintains an avoidance trajectory while pursuing performance objectives to the greatest extent possible. Experimentally, the controller successfully navigated a real-world obstacle avoidance scenario with an intuitive approach that achieved higher performance than a worst-case Robust MPC.

Promising avenues for future development of this research include: 1) applying the CMPC framework to new applications outside vehicle automation, and to other AV scenarios such as pedestrian avoidance or safely following vehicles that are liable to stop (e.g. mail or garbage trucks); 2) integrating CMPC into a real-time emergency recognition system or a visual perception system, capable of identifying contingencies; and 3) considering multiple contingent events simultaneously, which may require multiple contingency horizons.
APPENDIX
EXPLICIT SOLUTIONS TO TOY PROBLEMS
A. Robust MPC Solution
An analytic solution for the Toy Problem RMPC optimization follows. The potential obstacle is positioned at the end of the horizon at $k = N$. The problem statement is:

\min_u \sum_{k=0}^{N-1} u_k^2 = \min_u \; u^T u \quad (9a)

subject to dynamics (with $y_0 = 0$):

y_N = y_{N-1} + u_{N-1} = y_0 + \sum_{k=0}^{N-1} u_k = \mathbf{1}^T u \quad (9b)

and the obstacle constraint:

y_N = \mathbf{1}^T u \ge y_{obs} \quad (9c)

This optimization is solved by Lagrange multipliers:

\mathcal{L}(u, \lambda) = u^T u - \lambda (\mathbf{1}^T u - y_{obs})
\nabla_\lambda \mathcal{L} = 0 = \mathbf{1}^T u - y_{obs} \;\rightarrow\; \mathbf{1}^T u = y_{obs}
\nabla_u \mathcal{L} = 2u - \lambda \cdot \mathbf{1} = 0 \;\rightarrow\; u^* = (\lambda/2)\,\mathbf{1} \;\rightarrow\; u_0 = \cdots = u_{N-1} = y_{obs}/N

RMPC Solution: u_0 = y_{obs}/N \quad (9d)

This analytic solution is employed by the toy problem simulations in Section IV. When the potential obstacle enters the purview of RMPC, it takes immediate action to avoid it, amortizing its inputs $u^*$ evenly across the horizon.
We follow a similar strategy to solve CMPC. The optimization problem statement is:

\min_u \sum_{k=0}^{N-1} \begin{bmatrix} u^n \\ u^c \end{bmatrix}_k^T \begin{bmatrix} P^n & 0 \\ 0 & P^c \end{bmatrix} \begin{bmatrix} u^n \\ u^c \end{bmatrix}_k = \min_U \; U^T P\, U \quad (10a)

where $U = [u_0, \; u^n_1 \cdots u^n_{N-1}, \; u^c_1 \cdots u^c_{N-1}] \in \mathbb{R}^{2N-1}$, concatenating the entire horizon of nominal and contingency inputs. $P$ is the following matrix $\in \mathbb{R}^{(2N-1) \times (2N-1)}$:

P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & P^n \cdot I & 0 \\ 0 & 0 & P^c \cdot I \end{bmatrix}

$I$ and $0$ are the identity and zero matrices, respectively, $\in \mathbb{R}^{(N-1) \times (N-1)}$. $P^n + P^c = 1$, with the 1 in the top-left position representing the total weight on $u_0$, which replaces $u^n_0$ and $u^c_0$ in (10a). The optimization is subject to the dynamics equations:

y^n_N = u_0 + \sum_{k=1}^{N-1} u^n_k = \mathbf{1}_n^T U; \quad y^c_N = u_0 + \sum_{k=1}^{N-1} u^c_k = \mathbf{1}_c^T U \quad (10b)

where $\mathbf{1}_n^T = [1 \; 1 \cdots 1 \; 0 \cdots 0]$ and $\mathbf{1}_c^T = [1 \; 0 \cdots 0 \; 1 \cdots 1] \in \mathbb{R}^{2N-1}$. The constraints are:

u_0 = u^n_0 = u^c_0 \quad (10c)
y^c_N = \mathbf{1}_c^T U \ge y_{obs} \quad (10d)

Once again, solution by Lagrange multiplication:

\mathcal{L}(U, \lambda) = U^T P U - \lambda (\mathbf{1}_c^T U - y_{obs})
\nabla_\lambda \mathcal{L} = 0 = \mathbf{1}_c^T U - y_{obs} \;\rightarrow\; \mathbf{1}_c^T U = y_{obs}
\nabla_U \mathcal{L} = 2PU - \lambda \cdot \mathbf{1}_c = 0 \;\rightarrow\; u_0 = \lambda/2, \;\; u^c_{k>0} = \frac{\lambda}{2P^c}, \;\; u^n_{k>0} = 0

Substituting for $\lambda$, we find the following CMPC solution for $u_0$ as a function of $P^c$:

u_0 = u^c_{k>0} \cdot P^c = \frac{y_{obs}\, P^c}{P^c + N - 1} \;\; \begin{cases} = y_{obs}/N & \text{if } P^c = 100\% \\ = 0 & \text{if } P^c = 0\% \end{cases} \quad (10e)

As illustrated in Fig. 4, CMPC takes the minimum action necessary to maintain safety when $P^c = 0\%$. As $P^c$ increases, CMPC takes an increasingly conservative approach. $u_0$ increases monotonically with the concave-down shape shown in Fig. 14 until it fully adopts RMPC behavior at $P^c = 100\%$.

Fig. 14. u_0 returned by CMPC, as a function of P_c, for a toy problem. The concave-down shape is shown in contrast to a linear gain (dashed line).

ACKNOWLEDGMENT
The authors would like to thank Renault Group, VW Group Research, and VW ERL for experimental support. Alsterda is supported by Ford Motor Company and the U.S. Dept. of Veterans Affairs G.I. Bill.

REFERENCES

[1] A. J. Ramirez, A. C. Jensen, and B. H. Cheng, "A taxonomy of uncertainty for dynamically adaptive systems," in ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems, 2012.
[2] National Highway Traffic Safety Administration, U.S. Department of Transportation, "A Framework for Automated Driving System Testable Cases and Scenarios," Tech. Rep., 2018.
[3] M. J. Bloom and M. K. Menefee, "Scenario Planning and Contingency Planning," Public Productivity & Management Review, vol. 17, no. 3, pp. 223–230, 1994.
[4] D. Q. Mayne, "Model predictive control: recent developments and future promise," Automatica, vol. 50, no. 12, pp. 2967–2986, 2014.
[5] Q. Tam, T. Cypher-Plissart, and C. J. Ostafew, "Proactive Risk Mitigation and Reactive Control for Safe and Smooth Automated Driving," 2020.
[6] T. A. Linden and J. Glickaman, "Contingency Planning for an Autonomous Land Vehicle," in Proc. 10th International Joint Conference on Artificial Intelligence (IJCAI), 1987, pp. 1047–1054.
[7] R. Washington, K. Golden, J. Bresina, D. E. Smith, C. Anderson, and T. Smith, "Autonomous Rovers for Mars Exploration," in Proc. IEEE Aerospace Conference, 1999, pp. 237–251.
[8] N. Meuleau and D. E. Smith, "Optimal Limited Contingency Planning," in Conference on Uncertainty in Artificial Intelligence, 2003, pp. 417–426.
[9] J. Hardy and M. Campbell, "Contingency planning over probabilistic obstacle predictions for autonomous road vehicles," IEEE Transactions on Robotics, vol. 29, no. 4, pp. 913–929, 2013.
[10] J. Salvado, L. M. Custodio, and D. Hess, "Contingency planning for automated vehicles," in Proc. International Conference on Intelligent Robots and Systems (IROS), IEEE/RSJ, 2016.
[11] J. Richalet, A. Rault, J. L. Testud, and J. Papon, "Model predictive heuristic control: applications to industrial processes," Automatica, vol. 14, no. 5, pp. 413–428, 1978.
[12] D. Q. Mayne, "Robust and stochastic model predictive control: are we going in the right direction?" Annual Reviews in Control, vol. 41, pp. 184–192, 2016.
[13] M. B. Saltik, L. Özkan, J. H. Ludlage, S. Weiland, and P. M. Van den Hof, "An outlook on robust model predictive control algorithms: reflections on performance and computational aspects," Journal of Process Control, vol. 61, pp. 77–102, 2018.
[14] P. J. Campo and M. Morari, "Robust model predictive control," in Proc. American Control Conference, 1987.
[15] H. Sartipizadeh and T. L. Vincent, "A new robust MPC using an approximate convex hull," Automatica, vol. 92, pp. 115–122, 2018.
[16] J. Hu and B. Ding, "An efficient offline implementation for output feedback min-max MPC," International Journal of Robust and Nonlinear Control, vol. 29, no. 2, pp. 492–506, 2019.
[17] F. Blanchini, "Control synthesis for discrete time systems with control and state bounds in the presence of disturbances," Journal of Optimization Theory and Applications, vol. 65, no. 1, pp. 29–40, 1990.
[18] B. T. Lopez, J. P. How, and J. E. Slotine, "Dynamic tube MPC for nonlinear systems," in Proc. American Control Conference, 2019, pp. 1655–1662.
[19] M. Bujarbaruah, X. Zhang, M. Tanaskovic, and F. Borrelli, "Adaptive Stochastic MPC under Time Varying Uncertainty," IEEE Transactions on Automatic Control, early access, pp. 1–6, 2020.
[20] D. Krishnamoorthy, E. Suwartadi, B. Foss, S. Skogestad, and J. Jaschke, "Improving scenario decomposition for multistage MPC using a sensitivity-based path-following algorithm," IEEE Control Systems Letters, vol. 2, no. 4, pp. 581–586, 2018.
[21] I. Batkovic, U. Rosolia, M. Zanon, and P. Falcone, "A Robust Scenario MPC Approach for Uncertain Multi-Modal Obstacles," IEEE Control Systems Letters, vol. 5, no. 3, pp. 947–952, 2021.
[22] M. Farina, L. Giulioni, and R. Scattolini, "Stochastic linear model predictive control with chance constraints – a review," Journal of Process Control, vol. 44, pp. 53–67, 2016.
[23] J. P. Alsterda, M. Brown, and J. C. Gerdes, "Contingency Model Predictive Control for Automated Vehicles," in Proc. American Control Conference, 2019.
[24] J. Dallas, J. Wurts, J. L. Stein, and T. Ersal, "Contingent Nonlinear Model Predictive Control for Collision Imminent Steering in Uncertain Environments," in Proc. IFAC World Congress, 2020.
[25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[26] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, pp. 3–20, 2002.
[27] M. Brown, J. Funke, S. Erlien, and J. C. Gerdes, "Safe driving envelopes for path tracking in autonomous vehicles," Control Engineering Practice, vol. 61, pp. 307–316, 2017.
[28] V. Zhang, S. M. Thornton, and J. C. Gerdes, "Tire modeling to enable model predictive control of automated vehicles from standstill to the limits of handling," in Proc. 14th International Symposium on Advanced Vehicle Control, 2018.
[29] S. M. Erlien, J. Funke, and J. C. Gerdes, "Incorporating non-linear tire dynamics into a convex approach to shared steering control," in Proc. American Control Conference, 2014.
[30] J. Funke, M. Brown, S. M. Erlien, and J. C. Gerdes, "Collision avoidance and stabilization for autonomous vehicles in emergency scenarios," IEEE Transactions on Control Systems Technology, vol. 25, no. 4, pp. 1204–1216, 2017.
[31] H. Pacejka, Tire and Vehicle Dynamics, 3rd ed. Butterworth-Heinemann, 2012.
[32] J. Mattingley and S. Boyd, "CVXGEN: A code generator for embedded convex optimization," Optimization and Engineering, vol. 13, no. 1, pp. 1–27, 2012.
John P. Alsterda received the M.S. degree in mechanical engineering from Stanford University, Stanford, CA, USA, in 2018, and the B.S. degree in Physics from the University of Illinois at Urbana-Champaign, IL, USA, in 2011. He is currently pursuing the Ph.D. degree with Stanford University, Stanford, CA, USA. His current research interests include path planning and control for automated systems under uncertainty. He is a Lt. Cdr. in the United States Naval Reserve at the Office of Naval Research.