Publication


Featured research published by Daniel Ralph.


SIAM Journal on Optimization | 2006

Local Convergence of SQP Methods for Mathematical Programs with Equilibrium Constraints

Roger Fletcher; Sven Leyffer; Daniel Ralph; Stefan Scholtes

Recently, nonlinear programming solvers have been used to solve a range of mathematical programs with equilibrium constraints (MPECs). In particular, sequential quadratic programming (SQP) methods have been very successful. This paper examines the local convergence properties of SQP methods applied to MPECs. SQP is shown to converge superlinearly under reasonable assumptions near a strongly stationary point. A number of examples are presented that show that some of the assumptions are difficult to relax.


Operations Research | 2007

Using EPECs to Model Bilevel Games in Restructured Electricity Markets with Locational Prices

Xinmin Hu; Daniel Ralph

We study a bilevel noncooperative game-theoretic model of restructured electricity markets, with locational marginal prices. Each player in this game faces a bilevel optimization problem that we model as a mathematical program with equilibrium constraints (MPEC). The corresponding game is an example of an equilibrium program with equilibrium constraints (EPEC). We establish sufficient conditions for the existence of pure-strategy Nash equilibria for this class of bilevel games and give some applications. We show by examples the effect of network transmission limits, i.e., congestion, on the existence of equilibria. Then we study, for more general equilibrium programs with equilibrium constraints, the weaker pure-strategy concepts of local Nash and Nash stationary equilibria. We pose the latter as solutions of complementarity problems (CPs) and show their equivalence with the former in some cases. Finally, we present numerical examples of methods that attempt to find local Nash equilibria or Nash stationary points of randomly generated electricity market games.


SIAM Journal on Optimization | 1999

Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints

Houyuan Jiang; Daniel Ralph

Mathematical programs with nonlinear complementarity constraints are reformulated using better posed but nonsmooth constraints. We introduce a class of functions, parameterized by a real scalar, to approximate these nonsmooth problems by smooth nonlinear programs. This smoothing procedure has the extra benefit of often improving the feasibility and stability of the constraints of the associated nonlinear programs and their quadratic approximations. We present two globally convergent algorithms based on sequential quadratic programming (SQP) as applied in exact penalty methods for nonlinear programs. Global convergence of the implicit smooth SQP method depends on existence of a lower-level nondegenerate (strictly complementary) limit point of the iteration sequence. Global convergence of the explicit smooth SQP method depends on a weaker property, i.e., existence of a limit point at which a generalized constraint qualification holds. We also discuss some practical matters relating to computer implementations.
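To make the smoothing idea concrete: the complementarity condition 0 ≤ a, 0 ≤ b, a·b = 0 is equivalent to the nonsmooth equation min(a, b) = 0. One standard smooth approximation of the minimum, parameterized by a scalar μ > 0, is the perturbed-minimum function below. This specific function is chosen for illustration; the paper's parameterized class may differ.

```python
import math

def smoothed_min(a: float, b: float, mu: float) -> float:
    """Smooth approximation of min(a, b); recovers min(a, b) as mu -> 0.

    Since the complementarity condition 0 <= a, 0 <= b, a*b = 0 is
    equivalent to min(a, b) = 0, driving mu to zero recovers the original
    nonsmooth constraint from a sequence of smooth ones.
    """
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu * mu))

# As mu shrinks, the smooth function converges to min(a, b) = 1 here.
for mu in (1.0, 1e-2, 1e-4):
    gap = abs(smoothed_min(1.0, 2.0, mu) - 1.0)
    print(f"mu={mu:g}  |smoothed_min - min| = {gap:.2e}")
```

For each fixed μ the function is differentiable everywhere (the square-root argument is strictly positive), which is what lets the nonsmooth constraint be replaced by a smooth nonlinear-programming constraint.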


Mathematics of Operations Research | 1994

Global convergence of damped Newton's method for nonsmooth equations via the path search

Daniel Ralph

A natural damping of Newton's method for nonsmooth equations is presented. This damping, via a path search instead of the traditional line search, enlarges the domain of convergence of Newton's method and is therefore said to be globally convergent. Convergence behavior is like that of line-search damped Newton's method for smooth equations, including Q-quadratic convergence rates under appropriate conditions. Applications of the path search include damping the Robinson-Newton method for nonsmooth normal equations corresponding to nonlinear complementarity problems and variational inequalities, hence damping both Wilson's method (sequential quadratic programming) for nonlinear programming and the Josephy-Newton method for generalized equations. Computational examples from nonlinear programming are given.
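As a much simpler stand-in for the paper's path search, the sketch below shows ordinary line-search damping of a Newton iteration on a one-dimensional nonsmooth equation, using one element of the generalized (Clarke) derivative and |F| as the merit function. The test equation, starting point, and backtracking constants are illustrative assumptions, not the paper's construction.

```python
def F(x: float) -> float:
    # A simple nonsmooth equation: F(x) = x + |x| - 1, with root x = 0.5.
    return x + abs(x) - 1.0

def dF(x: float) -> float:
    # One element of the generalized (Clarke) derivative of F at x.
    # (For x < 0, F is constant; this toy run stays in x >= 0.)
    return 2.0 if x >= 0 else 0.0

def damped_newton(x: float, tol: float = 1e-10, max_iter: int = 50) -> float:
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            return x
        d = -fx / dF(x)          # Newton direction from the chosen derivative
        t = 1.0
        # Backtrack until the merit function |F| decreases sufficiently.
        while abs(F(x + t * d)) > (1.0 - 0.5 * t) * abs(fx):
            t *= 0.5
        x = x + t * d
    return x

root = damped_newton(2.0)
print(root)  # 0.5
```

The path search in the paper replaces the straight segment x + t·d with a piecewise-defined path built from the Newton path of the nonsmooth model, which is what extends the convergence theory beyond the smooth case.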


Optimization Methods & Software | 2004

Some properties of regularization and penalization schemes for MPECs

Daniel Ralph; Stephen J. Wright

Some properties of regularized and penalized nonlinear programming formulations of mathematical programs with equilibrium constraints (MPECs) are described. The focus is on the properties of these formulations near a local solution of the MPEC at which strong stationarity and a second-order sufficient condition are satisfied. In the regularized formulations, the complementarity condition is replaced by a constraint involving a positive parameter that can be decreased to zero. In the penalized formulation, the complementarity constraint appears as a penalty term in the objective. The existence and uniqueness of solutions for these formulations are investigated, and estimates are obtained for the distance of these solutions to the MPEC solution under various assumptions.
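A tiny worked instance of the penalized formulation (the example problem, penalty weight, and the projected-gradient solver are my assumptions for illustration; the paper analyzes properties of such formulations rather than prescribing a solver): minimize (x-1)² + (y-1)² subject to x, y ≥ 0 and the complementarity constraint x·y = 0, whose solutions are (1, 0) and (0, 1). Moving the complementarity term into the objective as a penalty gives a plain bound-constrained problem.

```python
def penalized_grad(x: float, y: float, rho: float):
    """Gradient of f(x, y) = (x-1)^2 + (y-1)^2 + rho * x * y."""
    return 2.0 * (x - 1.0) + rho * y, 2.0 * (y - 1.0) + rho * x

def solve_penalized(rho=100.0, step=5e-3, iters=2000, x=1.0, y=0.2):
    # Projected gradient descent on the nonnegative orthant.
    for _ in range(iters):
        gx, gy = penalized_grad(x, y, rho)
        x = max(0.0, x - step * gx)
        y = max(0.0, y - step * gy)
    return x, y

x, y = solve_penalized()
print(x, y)  # approaches the MPEC solution (1, 0), where x * y = 0
```

From this starting point the iterates are driven onto the boundary y = 0, where the penalty term vanishes and the remaining smooth problem pulls x to 1; this is the sense in which a stationary point of the penalized problem recovers a strongly stationary MPEC solution for a sufficiently large penalty parameter.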


Mathematics of Operations Research | 1996

Piecewise smoothness, local invertibility, and parametric analysis of normal maps

Jong-Shi Pang; Daniel Ralph

This paper is concerned with properties of the Euclidean projection map onto a convex set defined by finitely many smooth convex inequalities and affine equalities. Under a constant rank constraint qualification, we show that the projection map is piecewise smooth (PC1), hence Bouligand-differentiable (directionally differentiable), and a relatively simple formula is given for the B-derivative. These properties of the projection map are used to obtain inverse and implicit function theorems for associated normal maps, using a new characterization of invertibility of a PC1 function in terms of its B-derivative. An extension of the implicit function theorem which does not require local uniqueness is also presented. Degree theory plays a major role in the analysis of both the locally unique case and its extension.
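The simplest convex set of this kind is the nonnegative orthant, where the projection is the componentwise positive part: piecewise linear, hence PC1, and directionally differentiable even at kink points. The sketch below (my illustrative example, not the paper's general formula) checks the B-derivative at the origin against a difference quotient.

```python
def proj(x):
    # Euclidean projection onto the nonnegative orthant (componentwise).
    return [max(xi, 0.0) for xi in x]

def b_derivative_at_zero(d):
    # At x = 0 the projection is not differentiable, but it is
    # directionally differentiable with P'(0; d) = max(d, 0) componentwise.
    return [max(di, 0.0) for di in d]

x0 = [0.0, 0.0]
d = [1.0, -1.0]
t = 1e-7
stepped = proj([xi + t * di for xi, di in zip(x0, d)])
base = proj(x0)
quotient = [(s - b) / t for s, b in zip(stepped, base)]
print(quotient)                  # ~ [1.0, 0.0]
print(b_derivative_at_zero(d))   # [1.0, 0.0]
```

For sets cut out by general smooth convex inequalities the B-derivative is no longer this explicit, which is where the constant rank constraint qualification and the paper's formula come in.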


Mathematical Programming | 1996

Exact penalization and stationarity conditions of mathematical programs with equilibrium constraints

Zhi-Quan Luo; Jong-Shi Pang; Daniel Ralph; Shiquan Wu

Using the theory of exact penalization for mathematical programs with subanalytic constraints, the theory of error bounds for quadratic inequality systems, and the theory of parametric normal equations, we derive various exact penalty functions for mathematical programs subject to equilibrium constraints, and we also characterize stationary points of these programs.


Mathematical Programming | 1995

Directional derivatives of the solution of a parametric nonlinear program

Daniel Ralph; Stephan Dempe

Consider a parametric nonlinear optimization problem subject to equality and inequality constraints. Conditions under which a locally optimal solution exists and depends in a continuous way on the parameter are well known. We show, under the additional assumption of constant rank of the active constraint gradients, that the optimal solution is actually piecewise smooth, hence B-differentiable. We show, for the first time to our knowledge, a practical application of quadratic programming to calculate the directional derivative in the case when the optimal multipliers are not unique.
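A one-dimensional illustration of this phenomenon (my example, not the paper's): for the parametric problem min over x ≥ 0 of ½x² - p·x, the solution x(p) = max(p, 0) is continuous and piecewise smooth in the parameter p, but not differentiable at p = 0, where the inequality constraint switches between active and inactive; only one-sided directional derivatives exist there.

```python
def solve(p: float) -> float:
    # Closed-form solution of min_{x >= 0} 0.5*x**2 - p*x,
    # namely x(p) = max(p, 0).
    return max(p, 0.0)

def dir_derivative(p: float, d: float, t: float = 1e-8) -> float:
    # One-sided difference quotient approximating x'(p; d).
    return (solve(p + t * d) - solve(p)) / t

# At p = 0 the solution map is piecewise smooth but not differentiable:
print(dir_derivative(0.0, 1.0))   # ~ 1.0 (the solution starts moving)
print(dir_derivative(0.0, -1.0))  # ~ 0.0 (the solution stays at the bound)
```

In higher dimensions this one-sided derivative is no longer available in closed form, which is where computing it as the solution of an auxiliary quadratic program becomes useful, particularly when the multipliers are not unique.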


IEEE Transactions on Neural Networks | 2002

Effects of moving the centers in an RBF network

Chitra Panchapakesan; Marimuthu Palaniswami; Daniel Ralph; Chris Manzie

In radial basis function (RBF) networks, placement of centers is said to have a significant effect on the performance of the network. Supervised learning of center locations in some applications shows that such networks are superior to networks whose centers are located using unsupervised methods. But such networks can take the same training time as sigmoid networks: the increased time needed for supervised learning offsets the training-time advantage of regular RBF networks. One way to overcome this may be to train the network with a set of centers selected by unsupervised methods and then to fine-tune the locations of the centers. This can be done by first evaluating whether moving the centers would decrease the error and then, depending on the required level of accuracy, changing the center locations. This paper provides new results on bounds for the gradient and Hessian of the error considered first as a function of the independent set of parameters, namely the centers, widths, and weights; and then as a function of centers and widths, where the linear weights are now functions of the basis function parameters, for networks of fixed size. Moreover, bounds for the Hessian are also provided along a line beginning at the initial set of parameters. Using these bounds, it is possible to estimate how much one can reduce the error by changing the centers. Furthermore, a step size can be specified to achieve a guaranteed amount of reduction in error.
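The basic quantity behind these bounds is the gradient of the squared error with respect to the centers. The sketch below computes it for a tiny Gaussian RBF network and verifies that a small step along the negative gradient reduces the error; the data set, widths, weights, and step size are illustrative assumptions, and no claim is made that this reproduces the paper's bounds.

```python
import math

def rbf_out(x, centers, weights, sigma):
    # Output of a Gaussian RBF network at a scalar input x.
    return sum(w * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

def sq_error(data, centers, weights, sigma):
    return sum((rbf_out(x, centers, weights, sigma) - y) ** 2 for x, y in data)

def center_gradient(data, centers, weights, sigma):
    # dE/dc_j = sum_i 2 (f(x_i) - y_i) w_j phi_j(x_i) (x_i - c_j) / sigma^2
    grad = [0.0] * len(centers)
    for x, y in data:
        r = rbf_out(x, centers, weights, sigma) - y
        for j, (c, w) in enumerate(zip(centers, weights)):
            phi = math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
            grad[j] += 2.0 * r * w * phi * (x - c) / sigma ** 2
    return grad

data = [(0.0, 0.0), (0.5, 0.25), (1.0, 1.0)]   # samples of y = x^2
centers, weights, sigma = [0.2, 0.8], [0.5, 0.5], 0.5

e0 = sq_error(data, centers, weights, sigma)
g = center_gradient(data, centers, weights, sigma)
moved = [c - 1e-3 * gj for c, gj in zip(centers, g)]
e1 = sq_error(data, moved, weights, sigma)
print(e0, e1)  # the small step along the negative gradient decreases the error
```

Bounds on this gradient and on the Hessian along the step direction are what let one predict the size of the error reduction in advance, rather than merely observing it after the step.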


international symposium on neural networks | 1998

Effects of moving the centers in an RBF network

C. Panchapakesan; Daniel Ralph; Marimuthu Palaniswami

In radial basis function networks, placement of centers is one of the problems addressed, and it has a significant effect on the performance of the network. Supervised learning of center locations in some applications shows that such networks are superior to networks whose centers are located using unsupervised methods. However, supervised learning of centers seems to offset the advantages achieved by the two-stage learning of RBF networks. One way to overcome this may be to train the network with a set of centers selected by unsupervised methods and then to fine-tune the centers. This can be done by evaluating whether moving the centers would decrease the error. In this paper we calculate bounds for the gradient and Hessian of the error, considered as a function of the centers, for networks of fixed size. Using these bounds it is possible to estimate by how much one can reduce the error by changing the centers. Furthermore, a step size can be specified to achieve a guaranteed amount of reduction in error.

Collaboration


An overview of Daniel Ralph's collaborations.

Top Co-Authors

Jong-Shi Pang (University of Southern California)
Zhi-Quan Luo (The Chinese University of Hong Kong)
Stephen J. Wright (University of Wisconsin-Madison)
Yves Smeers (Université catholique de Louvain)
Xinmin Hu (University of Melbourne)
Michael C. Ferris (University of Wisconsin-Madison)