Hans Seywald
Langley Research Center
Publications
Featured research published by Hans Seywald.
Journal of Guidance Control and Dynamics | 1994
Hans Seywald
A method for generating finite-dimensional approximations to the solutions of optimal control problems is introduced. By employing a description of the dynamical system in terms of its attainable sets instead of differential equations, the controls are completely eliminated from the system model. Besides reducing the dimensionality of the discretized problem compared to state-of-the-art collocation methods, this approach also alleviates the search for initial guesses from which standard gradient search methods are able to converge. The mechanics of the new method are illustrated on a simple double-integrator problem. The performance of the new algorithm is demonstrated on a one-dimensional rocket ascent problem (the Goddard problem) in the presence of a dynamic pressure constraint.
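The attainable-set idea can be sketched on the paper's own double-integrator illustration. The sketch below is a minimal reconstruction, not the authors' code: it assumes the dynamics x1' = x2, x2' = u with |u| <= 1, eliminates the control, and constrains the discretized state rates directly, so only the states and the final time are optimization parameters. Node count, solver, and initial guess are illustrative choices.

```python
# Differential-inclusion sketch for the minimum-time double integrator.
# The control is eliminated: the state rates are constrained to the
# attainable set  x1_dot = x2 (exact),  x2_dot in [-1, 1].
import numpy as np
from scipy.optimize import minimize

N = 20                        # number of subintervals (illustrative)
x0, xf = (0.0, 0.0), (1.0, 0.0)

def unpack(z):
    tf = z[0]
    x1 = z[1:N+2]
    x2 = z[N+2:2*N+3]
    return tf, x1, x2

def cost(z):                  # minimum final time
    return z[0]

def eq_con(z):                # x1 rate must equal x2 (trapezoidal) + boundary conditions
    tf, x1, x2 = unpack(z)
    h = tf / N
    defects = (x1[1:] - x1[:-1]) / h - 0.5 * (x2[:-1] + x2[1:])
    bc = [x1[0] - x0[0], x2[0] - x0[1], x1[-1] - xf[0], x2[-1] - xf[1]]
    return np.concatenate([defects, bc])

def ineq_con(z):              # x2 rate must lie in the attainable set [-1, 1]
    tf, x1, x2 = unpack(z)
    h = tf / N
    r = (x2[1:] - x2[:-1]) / h
    return np.concatenate([1.0 - r, r + 1.0])   # both expressions >= 0

z0 = np.concatenate([[2.0], np.linspace(0, 1, N+1), np.zeros(N+1)])  # trivial guess
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": eq_con},
                            {"type": "ineq", "fun": ineq_con}],
               bounds=[(0.1, None)] + [(None, None)] * (2*N + 2))
print("minimum time ~", res.x[0])   # analytic optimum is 2.0 for this transfer
```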
Journal of Guidance Control and Dynamics | 1995
Hans Seywald; Renjith R. Kumar; S. M. Deshpande
For optimal control problems in Mayer form with all controls appearing only linearly in the equations of motion, this paper presents a method for calculating the optimal solution without user-specified initial guesses and without a priori knowledge of the optimal switching structure. The solution is generated in a sequence of steps involving a genetic algorithm (GA), nonlinear programming, and (multiple) shooting. The centerpiece of this method is a variant of the GA that provides reliable initial guesses for the nonlinear programming method, even for large numbers of parameters. As a numerical example, minimum-time spacecraft reorientation trajectories are generated. The described procedure never failed to correctly determine the optimal solution.

Finding the solution to an optimal control problem is a difficult and time-consuming task. By employing Pontryagin's minimum principle in conjunction with simple or multiple shooting to solve the resulting boundary-value problem (BVP), this task becomes equivalent to finding the numerical values of the costates (Lagrange multipliers) associated with the physical states of the underlying dynamic system at discrete times. Thus, the problem of solving an optimal control problem can be reduced to solving a nonlinear system of equations. Usually, Newton-Raphson methods are well suited for this type of problem. However, due to the sensitivity of the state-costate dynamical system, the task of finding initial guesses that lie within the domain of convergence can become arbitrarily difficult. In addition, if a control appears only linearly in the equations of motion, the optimal solution is known to consist of a sequence of bang-bang and, possibly, singular subarcs. The switching structure, however, is not known in advance and has to be found by trial and error.

The present paper introduces a method for generating the optimal control solution for problems in which all controls appear only linearly in the equations of motion. In this method, the user need not provide initial guesses for the state history, the control history, the costate history, or the switching structure. Initial guesses that lie within the domain of convergence of a gradient search method are generated with a genetic algorithm (GA) using substring length 1 for each individual control parameter. A theoretical justification of the approach is given through hodograph analysis and convexity arguments. General convergence arguments pertaining to the GA are mainly heuristic and based on practical experience; because of the probabilistic nature of GAs, this seems to be unavoidable.
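The role of the GA as an initial-guess generator can be illustrated with a minimal sketch. It assumes a bang-bang control encoded as one bit per node (substring length 1, as the abstract describes) and a simple double-integrator target-reaching problem as the fitness; all problem data and GA settings here are assumptions for illustration, and the resulting best individual would seed the nonlinear programming/shooting steps.

```python
# GA over bang-bang control sequences, one bit per control parameter.
import numpy as np

rng = np.random.default_rng(0)
N, tf, target = 32, 2.2, np.array([1.0, 0.0])
h = tf / N

def simulate(bits):                      # bits in {0,1} -> u in {-1,+1}
    x = np.zeros(2)
    for b in bits:
        u = 2.0 * b - 1.0
        x = x + h * np.array([x[1], u])  # Euler integration of x1'=x2, x2'=u
    return x

def fitness(bits):                       # penalize miss distance at final time
    return -np.linalg.norm(simulate(bits) - target)

pop = rng.integers(0, 2, size=(60, N))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    # tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # one-point crossover + bit-flip mutation
    cut = rng.integers(1, N, len(pop))
    children = np.array([np.concatenate([parents[k, :cut[k]],
                                         parents[(k+1) % len(pop), cut[k]:]])
                         for k in range(len(pop))])
    children ^= (rng.random(children.shape) < 0.02).astype(children.dtype)
    pop = children

best = max(pop, key=fitness)
print("best miss distance:", -fitness(best))  # `best` seeds the gradient method
```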
Journal of Guidance Control and Dynamics | 1996
Hans Seywald; Renjith R. Kumar
A method for the automatic calculation of costates using only the results obtained from direct optimization techniques is presented. The approach exploits the relation between the time-varying costates and certain sensitivities of the variational cost function, a relation that also exists between the Lagrangian multipliers obtained from a direct optimization approach and the sensitivities of the associated nonlinear-programming cost function. The complete theory for treating free, control-constrained, interior-point-constrained, and state-constrained optimal control problems is presented. As a numerical example, a state-constrained version of the brachistochrone problem is solved and the results are compared to the optimal solution obtained from Pontryagin's minimum principle. The agreement is found to be excellent.

Nomenclature
f = right-hand side of state equations
g_e = control equality constraints
g_i = control inequality constraints
h_e = state equality constraints
h_i = state inequality constraints
J = cost function
M = interior-point constraints
m = dimension of control vector u
N = total number of nodes minus 1 = total number of subintervals
n = dimension of state vector x
PWC = set of piecewise continuous functions
t = time
t_f = final time
t_i = nodes along the time axis
t_0 = initial time
u = control vector
x = state vector
x_f = final state
x_i = state vector at node t_i
x_0 = initial state
λ(t) = costate
λ_i = Lagrangian multiplier associated with differential constraints along subinterval i
μ_i = Lagrangian multiplier associated with state constraints at node i
σ_i = Lagrangian multiplier associated with control constraints along subinterval i
φ = cost function
ψ_f = boundary conditions at final time
ψ_0 = boundary conditions at initial time
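The central relation exploited by the method can be written compactly. In the notation of the nomenclature above (with normalization conventions assumed, since the abstract does not state them), the costate at a node equals the sensitivity of the optimal cost to the state at that node, and the NLP multiplier on the discretized dynamics plays the same role for the finite-dimensional problem:

```latex
% Sensitivity relation underlying the costate-estimation method (sketch).
\[
  \lambda(t_i) = \frac{\partial J^\ast}{\partial x(t_i)}
  \quad \text{(variational problem)}, \qquad
  \tilde\lambda_i = \frac{\partial \tilde J^\ast}{\partial x_i}
  \quad \text{(discretized NLP)},
\]
```

so that, up to discretization error, the NLP multipliers on the defect constraints at node t_i directly approximate the costate: λ(t_i) ≈ λ̃_i.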
Journal of Guidance Control and Dynamics | 1993
Hans Seywald; Eugene M. Cliff
The Goddard problem is that of maximizing the final altitude for a vertically ascending, rocket-powered vehicle under the influence of an inverse-square gravitational field and atmospheric drag. The present paper deals with the effects of two additional constraints, namely, a dynamic pressure limit and a specified final time. Nine different switching structures involving zero-thrust arcs, full-thrust arcs, singular-thrust arcs, and state-constrained arcs are obtained when the value of the dynamic pressure limit is varied between zero and infinity and the final time is specified between the minimum possible time within which all of the fuel can be burned and the natural final time that emerges for the problem with final time unspecified. For all points in the aforementioned domain of dynamic pressure limit and prescribed final time, the associated optimal switching structure is clearly identified. Finally, a simple intuitive feedback law is presented for the free-time problem. For all values of the prescribed dynamic pressure limit, this strategy yields a loss in final altitude of less than 3 percent with respect to the associated optimal solution.
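For reference, the constrained Goddard problem described here has the following standard formulation; generic drag and atmosphere functions D(h, v) and ρ(h) are assumed, since the abstract does not reproduce the paper's exact models:

```latex
% Standard constrained Goddard-problem formulation (sketch).
\[
\begin{aligned}
  \max_{T(\cdot)}\ h(t_f) \quad \text{subject to}\quad
  \dot h &= v,\\
  \dot v &= \frac{T - D(h,v)}{m} - g(h), \qquad g(h) = g_0\,\frac{R^2}{(R+h)^2},\\
  \dot m &= -\,T/c, \qquad 0 \le T \le T_{\max},\\
  q &= \tfrac{1}{2}\,\rho(h)\,v^2 \le q_{\max}, \qquad t_f\ \text{specified}.
\end{aligned}
\]
```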
Journal of Guidance Control and Dynamics | 1994
Hans Seywald; Eugene M. Cliff; Klaus H. Well
Range-optimal trajectories for an aircraft flying in the vertical plane are obtained from Pontryagin's minimum principle. Control variables are the load factor, which appears nonlinearly in the equations of motion, and the throttle setting, which appears only linearly. Both controls are subject to fixed bounds. Additionally, a dynamic pressure limit is imposed, which represents a first-order state-inequality constraint. For fixed flight time, initial coordinates, and final coordinates of the trajectory, the effect of the load factor limit on the resulting optimal switching structure is studied. All trajectories involve singular control along arcs with active dynamic pressure limit. For large flight times the optimal switching structures have not yet been found.
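That the dynamic pressure limit is a first-order state-inequality constraint can be verified directly. Assuming standard point-mass dynamics in the vertical plane (not spelled out in the abstract), one differentiation of the constraint along the trajectory brings in a control:

```latex
% Order of the dynamic pressure constraint (sketch).
\[
  S(x) = \tfrac{1}{2}\,\rho(h)\,V^2 - q_{\max} \le 0, \qquad
  \dot S = \tfrac{1}{2}\,\rho'(h)\,\dot h\,V^2 + \rho(h)\,V\,\dot V,
\]
```

and V̇ = (T − D)/m − g sin γ contains the throttle setting, so a control appears after a single differentiation: the constraint is of order one.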
Journal of Guidance Control and Dynamics | 1994
Hans Seywald; Eugene M. Cliff
In this paper a robust feedback algorithm is presented for a near-minimum-fuel ascent of a generic two-stage launch vehicle operating in the equatorial plane. The development of the algorithm is based on the ideas of neighboring optimal control and can be divided into three phases. In phase 1 the formalism of optimal control is employed to calculate fuel-optimal ascent trajectories for a simple point-mass model. In phase 2 these trajectories are used to numerically calculate gain functions of time for the control(s), for the total flight time, and possibly for other variables of interest. In phase 3 these gains are used to determine feedback expressions for the controls associated with a more realistic model of a launch vehicle. With the Advanced Launch System in mind, all calculations in this paper are performed on a two-stage vehicle with fixed thrust history, but this restriction is by no means important for the approach taken. Performance and robustness of the algorithm are found to be excellent.
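The phase-3 feedback has the generic neighboring-optimal form sketched below. The abstract confirms gains for the controls and the total flight time; the exact gain structure shown here is an assumption:

```latex
% Generic neighboring-optimal feedback form (sketch).
\[
  u(t) = u^\ast(t) + K_u(t)\,\bigl[x(t) - x^\ast(t)\bigr], \qquad
  t_f = t_f^\ast + k_{t_f}^{\mathsf T}\,\bigl[x(t) - x^\ast(t)\bigr],
\]
```

where u*, x* are the nominal optimal histories from phase 1 and K_u(·), k_{t_f}(·) are the time-varying gain functions computed in phase 2.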
Journal of Guidance Control and Dynamics | 1995
Renjith R. Kumar; Hans Seywald
Direct methods of solving optimal control problems include techniques based on control discretization, where the control function of time is parameterized, and collocation, where both the control and state functions of time are parameterized. A recently introduced direct approach of solving optimal control problems via differential inclusions parameterizes only the state and constrains the state rates to lie in a feasible hodograph space. In this method, the controls, which are just artifacts used to parameterize the feasible hodograph space, are completely eliminated from the optimization process. Explicit and implicit schemes of control elimination are discussed. Comparison of the differential inclusions method is made to collocation in terms of number of parameters, number of constraints, CPU time required for solution, and ease of calculation of analytical gradients. A minimum time-to-climb problem for an F-15 aircraft is used as an example for comparison. For a special class of optimal control problems with linearly appearing bounded controls, it is observed that the differential inclusion scheme is better in terms of number of parameters and constraints. Increased robustness of the differential inclusion methodology over collocation is also observed for the Goddard problem, where singular control is part of the optimal solution.

Background
The most precise approach to solving optimal control problems is the variational approach [1] based on Pontryagin's minimum principle [2]. This is an indirect approach, as it involves solving the necessary conditions of optimality associated with the infinite-dimensional optimal control problem rather than directly optimizing the cost of a finite-dimensional discretization of the original problem. This method requires advanced analytical skills, and generating numerical solutions of the resulting two-point boundary-value problem is highly nontrivial. In the indirect method, the controls are eliminated using the minimum principle; thus, the optimal control is, in general, a nonlinear function of the state and costate variables. The most important application of the indirect method is the generation of benchmark solutions. Usually, good convergence is achieved only with excellent initial guesses for the nonintuitive costates. Additionally, the switching structure has to be guessed correctly in advance. For rapid trajectory prototyping, the safest approaches are the direct methods [3]. These methods rely on a finite-dimensional discretization of the optimal control problem into a nonlinear programming problem. Even though these methods do not enjoy the high precision and resolution of indirect methods, their convergence robustness makes them the method of choice for most practical applications. Moreover, these methods do not require the advanced mathematical skills necessary to pose and solve the variational problem.
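The parameter and constraint comparison can be made concrete with rough tallies for a generic problem with n states, m controls, and N subintervals; these counts are my own, and exact numbers depend on the discretization details:

```latex
% Rough parameter/constraint tallies (sketch).
\[
\begin{aligned}
  \text{collocation:} &\quad (n+m)(N+1)\ \text{parameters},\quad
    nN\ \text{defect constraints},\\
  \text{differential inclusion:} &\quad n(N+1)\ \text{parameters},\quad
    \dot x \in \mathcal{H}(x)\ \text{rate constraints per subinterval},
\end{aligned}
\]
```

where H(x) denotes the attainable (hodograph) set; eliminating the m control parameters per node is what shrinks the optimization vector.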
Journal of Guidance Control and Dynamics | 1995
Renjith R. Kumar; Hans Seywald; Eugene M. Cliff
Closed-loop guidance of a medium-range air-to-air missile (AAM) against a maneuvering target is synthesized using a three-phase, near-optimal guidance scheme. A large portion of the closed-loop guidance is performed using neighboring optimal control techniques about open-loop optimal solutions obtained by solving the associated minimum-time-to-intercept trajectories. The first phase is the boost-phase guidance, where the normal acceleration limit may be active due to high lofting of the boost-sustain-coast AAM. The guidance in this phase accommodates only errors in the missile's state variables, the target maneuvering being neglected. The boost-phase guidance involves guidance in the presence of active control constraints. The second phase is the midcourse guidance, where both state perturbations and target maneuvers are considered. Comparisons are made between guidance with gain indexing performed with clock time and with performance index to go. Models of aggressive and run-away targets were used, and the performance of the guidance scheme is excellent. Three methods of optimal gain evaluation are also discussed. Performance augmentation is obtained by using a center of attainability as a pseudotarget that fairs into the actual target as time to go becomes zero. The final phase is the terminal guidance, which employs proportional navigation and its variants.
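The terminal phase's proportional navigation law, in its standard textbook form (the abstract names PN but gives no constants), is:

```latex
% Textbook proportional navigation law.
\[
  a_c = N'\,V_c\,\dot\lambda,
\]
```

where a_c is the commanded normal acceleration, N' the navigation ratio (typically 3 to 5), V_c the closing velocity, and λ̇ the line-of-sight rate.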
Journal of Guidance Control and Dynamics | 2010
Haijun Shen; Hans Seywald; Richard W. Powell
This paper aims at reducing the sensitivity of the minimum-fuel powered descent trajectory on Mars in the presence of uncertainties and perturbations, using the desensitized optimal control methodology. The lander is modeled as a point mass in a uniform gravitational field, and the engine throttle is considered the control variable, which is bounded between two nonzero settings. Unlike the conventional practice of designing the nominal trajectory and a feedback tracking controller separately, the desensitized optimal control strategy incorporates the two designs in synergy, delivering superior performance. Sensitivities of the final position and velocity with respect to perturbed states at all times are derived and augmented onto the minimum-fuel performance index through penalty factors. The linear quadratic regulator technique is used to design the feedback control gains. To reduce the likelihood of the closed-loop throttle exceeding the prescribed bounds, a multiplicative factor is applied to the feedback gains. This reshapes the nominal trajectory away from the well-known maximum-minimum-maximum structure: the nominal throttle is encouraged to stay away from the prescribed bounds, leaving room for the feedback control. Monte Carlo simulations show that the occurrence of out-of-bound closed-loop throttles is significantly reduced, leading to improved landing precision.
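A compact way to state the desensitization described above is an augmented performance index; the penalty weights and norms below are illustrative assumptions, and the paper's exact form may differ:

```latex
% Illustrative augmented performance index for desensitized optimal control.
\[
  J_{\text{aug}} = J_{\text{fuel}}
    + \int_{t_0}^{t_f} \Bigl( w_r\,\lVert S_r(t)\rVert^2
    + w_v\,\lVert S_v(t)\rVert^2 \Bigr)\,dt,
  \qquad
  S_r(t) = \frac{\partial r(t_f)}{\partial x(t)},\quad
  S_v(t) = \frac{\partial v(t_f)}{\partial x(t)},
\]
```

where the sensitivities are propagated with the closed-loop state-transition matrix, so the penalty also reflects the LQR feedback gains.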
Journal of Guidance Control and Dynamics | 1994
Renjith R. Kumar; Hans Seywald
Stationkeeping of one spacecraft in low Earth orbit with respect to another, or with respect to a reference point in space, is a common orbit maintenance and guidance requirement. This paper deals with formulating an infinite-time fuel-optimal control problem using the Hill equations (also known as the Clohessy-Wiltshire equations) and solving it via a direct approach using concepts of hodograph space and differential inclusions. The differential-inclusion-based direct method has been selected due to its excellent convergence robustness. Using this methodology, numerous optimal solutions corresponding to various differential drag profiles and stationkeeping error tolerances were easily obtained from trivial initial guesses. The major contribution of this paper is the interesting observation made regarding the structure of the fuel-optimal solutions as a function of the differential drag profiles and stationkeeping error tolerances. Results from this study can be used for estimating fuel budgets and developing fuel-optimal stationkeeping guidance laws.
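For reference, the Hill (Clohessy-Wiltshire) equations underlying this formulation, in their standard form with mean motion n, control accelerations u, and the differential drag d placed in the along-track channel (the usual modeling choice, assumed here):

```latex
% Hill (Clohessy-Wiltshire) relative-motion equations.
\[
\begin{aligned}
  \ddot x - 2n\dot y - 3n^2 x &= u_x &&\text{(radial)}\\
  \ddot y + 2n\dot x &= u_y + d &&\text{(along-track)}\\
  \ddot z + n^2 z &= u_z &&\text{(cross-track)}
\end{aligned}
\]
```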