Convergence Analysis of Fixed Point Chance Constrained Optimal Power Flow Problems
ARGONNE NATIONAL LABORATORY, 9700 South Cass Avenue, Argonne, Illinois 60439
J. J. Brust, M. Anitescu
Mathematics and Computer Science Division
Preprint ANL/MCS-P9431-0121
January 2021

This work was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357 at Argonne National Laboratory.
Johannes J. Brust and Mihai Anitescu, Member, IEEE
Abstract—For optimal power flow problems with chance constraints, a particularly effective method is based on a fixed point iteration applied to a sequence of deterministic power flow problems. However, a priori, the convergence of such an approach is not necessarily guaranteed. This article analyzes the convergence conditions for this fixed point approach and reports numerical experiments, including for large IEEE networks.
Index Terms—Fixed point method, chance constraints, stochastic optimization, AC optimal power flow
I. INTRODUCTION

Chance constrained optimization problems are typically computationally very challenging. However, when modeling the effects of uncertainty on optimal power networks, potentially large chance constrained optimization problems arise [1]–[5]. Commonly, stochastic optimal power flow models are developed by reformulating a widely accepted deterministic model. One such "classical" model is the AC optimal power flow model (AC-OPF) [6]. Because AC-OPF has multiple degrees of freedom with respect to the problem variables, various stochastic power flow paradigms can be developed from it. In particular, one can define different subsets of variables as stochastic variables. For instance, in [7] power generation is regarded as being stochastic (with constant demands), resulting in probabilistic objective functions. This article develops a chance constrained AC optimal power flow model (CC-AC-OPF) in which the objective function is deterministic and the demands can be stochastic. This enables direct interpretation of the meaning of the optimal objective function values, and it can be more realistic, given that in practice demand is a far greater source of uncertainty than supply. Yet, independent of how the stochastic power flow model is formulated, it typically yields a chance constrained optimization problem. In order to also solve large instances of the resulting CC-AC-OPF problems, a fixed point iteration on a sequence of modified, related, and simpler deterministic AC-OPF problems is used. Iterative methods that solve a sequence of simpler problems have been very effective on a variety of recent power systems problems [8]–[10]. In the context of chance-constrained optimization, such an approach has been successfully used in [11], [12], among others. However, prior to this work, there did not exist analytical criteria for when one can expect this fixed point iteration to converge. Therefore, this article describes a convergence analysis of the chance
constrained fixed point iteration and tests the method on a set of standard IEEE networks. The numerical experiments report results for networks with up to 9000 nodes. To summarize, the contributions of this work are: (1) the formulation of a new CC-AC-OPF model with a deterministic objective function, derived from uncertain demands; (2) the application and analysis of a fixed-point algorithm for an implicit chance constrained problem. Even though the use of iterative (fixed-point) techniques is widespread in power systems problems, previously no rigorous analysis had been undertaken for this formulation. In particular, we disentangle the effects of model parameters and network properties on the convergence; (3) we include numerical experiments on large test cases.

J. J. Brust and M. Anitescu are with Argonne National Laboratory, Mathematics and Computer Science Division, Lemont, IL; e-mails: [email protected], [email protected]. This work was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357 at Argonne National Laboratory.
A. AC Power Flow
The power network is defined by N nodes (buses) and N_L edges (lines). Associated with each bus are a voltage magnitude v_i, a phase angle θ_i, and a quantity that represents power generation or consumption at the i-th bus, for 1 ≤ i ≤ N. In particular, let p_gi be the real power generated at bus i, q_gi the corresponding reactive power, and p_di, q_di the real and reactive demands, respectively. The buses are furthermore divided into two groups: generators and loads. The indexing sets are G and L, which are related to the set of all buses by B = G ∪ L and N = N_G + N_L. Throughout, we use the indexing sets {G, L} as subscripts of bold font vector variables. Note that in our setup each bus fits into exactly one of the two categories (though one can easily set a virtual zero load at a node). It is customary to assume that the network contains one reference bus (typically at i = 1). Load buses do not have active power generation and are thus defined by p_gL = q_gL = 0, where p_g, q_g are the vectors that contain the p_gi's and q_gi's. The power flow in the system is described by the power flow equations, which couple the variables (e.g., [6, Sections III-IV]). Specifically, let G_ik, B_ik denote the entries of the network's admittance matrix and define the quantities θ_ik ≡ θ_i − θ_k, c_ik ≡ G_ik cos(θ_ik) + B_ik sin(θ_ik), and d_ik ≡ G_ik sin(θ_ik) − B_ik cos(θ_ik). With these definitions, the 2N power flow equations, for i = 1, ..., N, are

  v_i ∑_{k=1}^N v_k c_ik − (p_gi − p_di) = 0,   (1)
  v_i ∑_{k=1}^N v_k d_ik − (q_gi − q_di) = 0,

when grouped by buses. If we let d = vec(p_d, q_d) be the R^{2N} vector containing the p_di, q_di, then in vector notation (1) is expressed as f(v, θ, p_g, q_g; d) = 0, where f represents the nonlinear equations in (1). Note, however, that f is linear in d, a fact we will use later. Moreover, d is regarded as a parameter and not a variable.

B. AC Optimal Power Flow
Optimal power flow determines the best set of variables that satisfies the network constraints. In addition to the power flow equations from (1), branch current bounds are typically included in the optimization problem. In particular, let LC be the set of all line connections, i.e., the set of index pairs that describe all line connections. For instance, if bus i is connected to bus k, then (i, k) ∈ LC. Subsequently, the so-called branch current constraints are

  (D_ik^max)² − ( v_i cos(θ_i) − v_k cos(θ_k) )² − ( v_i sin(θ_i) − v_k sin(θ_k) )² ≥ 0,  ∀ (i, k) ∈ LC,

for constant current limits D_ik^max. In vector notation these N_L constraints are represented as g(v, θ) ≥ 0. It is also desirable to include "hard" bounds on the generation variables, such as l_p ≤ p_gG ≤ u_p, where l_p, u_p represent constant lower and upper bounds. The AC optimal power flow (AC-OPF) problem, for a cost function C, is thus formulated as

  minimize_{v, θ, p_g, q_g}  C(p_gG)   (2)
  subject to  f(v, θ, p_g, q_g; d) = 0   (3)
              g(v, θ) ≥ 0
              l_p ≤ p_gG ≤ u_p
              l_q ≤ q_gG ≤ u_q
              l_v ≤ v ≤ u_v
              l_θ ≤ θ ≤ u_θ.

The cost function is typically a convex quadratic function that depends only on the real power generation. Specifically, C(p_gG) = ∑_{i∈G} q_ii (p_gi)² + ∑_{i∈G} q_i p_gi + q_0, for cost data q_ii, q_i, and q_0. The solution s* = vec(v*, θ*, (p_g)*, (q_g)*) of (2) is called the optimal power flow point, or OPF point.

C. Chance Constrained AC Optimal Power Flow
In order to introduce uncertainty related to, e.g., renewable energy into the OPF problem (2), we regard the demand terms in (1) (i.e., p_di, q_di) as forecasted values with possible error. Specifically, these stochastic quantities are represented as

  p_di + ω_i,   q_di + ω_{N+i},   (4)

where ω_i and ω_{N+i} represent forecasting errors. For compactness, the stochastic errors are collected in the 2N vector ω. Here we assume ω is normal; relaxing this assumption yields different approaches. The chance constrained AC-OPF model in this section is developed such that the objective function depends only on deterministic variables. When the objective function represents cost, deterministic values are meaningful and important. Since the power flow equations in (1) are underdetermined, i.e., f : R^{2N+N_G} → R^{2N}, the system has N_G degrees of freedom. Subsequently, let y = p_G represent an R^{N_G} vector of deterministic variables and x = x(ω) = vec(q_G(ω), v_L(ω), θ(ω)) a 2N vector of stochastic variables. The stochastic power flow equations are represented as

  f(x, y; d) + ω = f_ω(x(ω), y; d + ω) ≡ f_ω = 0.   (5)

If ω = 0 these equations reduce to the power flow equations in (1). Note that the power flow equations couple the variables and that the uncertainty in x depends on the uncertainty in the demands d, since x = x(ω) = x(y, d + ω). We set C(p_gG) = C(y) to reflect the change of variables for the stochastic optimal power flow problem and note that the objective function is deterministic. Let P(z ≥ 0) ≥ 1 − ε represent a vector of inequalities with non-negative probability constraints, for which each element of the left-hand side corresponds to a cumulative probability (for z_i ≥ 0) and each element of ε is in the interval 0 < ε < 1.
Then the chance constrained (stochastic) optimal power flow problem is given by

  minimize_{x(ω), y}  C(y)   (6)
  subject to  f_ω(x(ω), y; d + ω) = 0  ∀ω
              P( g(x(ω), y) ≥ 0 ) ≥ 1 − ε_g   (7)
              P( l_x ≤ x(ω) ≤ u_x ) ≥ 1 − ε_x   (8)
              l_y ≤ y ≤ u_y.

Observe that the problem in (6) includes chance constraints (probability constraints) on the line flow limits (7) and on the stochastic variables x(ω) (8), while deterministic limits are set on y. Here ε_g and ε_x correspond to model parameters for setting probability thresholds.
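As a concrete reference for the notation, the deterministic residual map f from (1), which the perturbed system (5) shifts by ω, can be evaluated in a few lines. The following is a minimal NumPy sketch; the function and array names are ours, not from the paper:

```python
import numpy as np

def power_flow_residuals(v, theta, p_net, q_net, G, B):
    """Residuals of the 2N power flow equations (1).

    v, theta     : voltage magnitudes and phase angles (length N)
    p_net, q_net : net injections p_g - p_d and q_g - q_d (length N)
    G, B         : real and imaginary parts of the admittance matrix (N x N)
    """
    dth = theta[:, None] - theta[None, :]      # theta_ik = theta_i - theta_k
    c = G * np.cos(dth) + B * np.sin(dth)      # c_ik
    d = G * np.sin(dth) - B * np.cos(dth)      # d_ik
    f_p = v * (c @ v) - p_net                  # real power mismatches
    f_q = v * (d @ v) - q_net                  # reactive power mismatches
    return np.concatenate([f_p, f_q])          # f = 0 at a power flow solution
```

Adding a forecasting error ω to the demands simply shifts these residuals by ω, which is the linearity in d noted in Section I-A.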
1) Computing Chance Constraints:
To practically compute the chance constraints in problem (6), the stochastic variables are linearized around the error ω (zero mean), similar to [11]. This is equivalent to assuming that ω is sufficiently small, which we proceed to do in the rest of the paper. In particular,

  x(y, d + ω) ≈ x(y, d) + ( ∂x(y, d + ω)/∂ω )|_{ω=0} ω ≡ x_ω.   (9)

Note that x(y, d) = x and that the derivatives ∂x(y, d)/∂ω are deterministic. Thus the expectation and variance of x_ω are E[x_ω] = x(y, d) = x and Var[x_ω] = ( ∂x(y, d)/∂ω ) Var[ω] ( ∂x(y, d)/∂ω )^⊤, respectively. Because computing the full nonlinear probability constraints is typically very computationally expensive, the probabilities of the linearized random variables are used instead. For instance, the constraints from (8) are written as

  P( x_ω ≤ u_x ) ≥ 1 − ε_x,   P( l_x ≤ x_ω ) ≥ 1 − ε_x.   (10)

To handle the vector of probabilities in (10) one can use a Bonferroni bound [13], in which each individual variable (x_ω)_r, for 1 ≤ r ≤ 2N, satisfies an individual, highly conservative probability constraint. However, when the variables are independent (or can be treated as such), the probabilities can be separated without restrictions. The mean of (x_ω)_r is (x)_r, while the variance can also be explicitly computed. Define

  ∂x/∂ω ≡ Γ,   (11)

and let e_r be the r-th column of the identity matrix. Moreover, denote Var[ω] = Σ. With this, the variance of (x_ω)_r is ‖ e_r^⊤ Γ Σ^{1/2} ‖². In turn, when the variables can be treated independently, individual probability constraints, such as P( (x_ω)_r ≤ (u_x)_r ) ≥ 1 − (ε_x)_r, can be represented via the normal distribution function F_Nrm as

  ( (u_x)_r − (x)_r ) / ‖ e_r^⊤ Γ Σ^{1/2} ‖ ≥ (F_Nrm)^{-1}( 1 − (ε_x)_r ),

where (F_Nrm)^{-1}(·) is the inverse cumulative distribution function.
Defining z_r = (F_Nrm)^{-1}(1 − (ε_x)_r), the constraints are represented as

  (u_x)_r − λ_r ≥ (x)_r,   λ_r = z_r ‖ e_r^⊤ Γ Σ^{1/2} ‖.   (12)

Similarly, for 2N + 1 ≤ r ≤ 2N + N_L and r̄ = r − 2N, defining z_r = (F_Nrm)^{-1}(1 − (ε_g)_r), the remaining probability constraints are represented as

  g(x, y)_r̄ − λ_r ≥ 0,  with  λ_r = z_r ‖ e_r̄^⊤ (∂g/∂x) Γ Σ^{1/2} ‖.   (13)

Note that the λ_r depend on x, y, and d, e.g., λ_r = λ_r(x(y, d)), which we will use later. Moreover, the inequality in (12) is deterministic and thus straightforward to compute once Γ is known. Furthermore, when (ε_x)_r is a small number (which is typically the case), then z_r > 0 and λ_r > 0. Thus the λ_r values are regarded as constraint tightenings and represent the effects of stochasticity in the constraints.
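The tightenings in (12) are inexpensive to evaluate once Γ is available. A minimal sketch, assuming a dense Γ and a matrix square root Σ^{1/2} are given (the function and variable names are ours), using the standard normal quantile from the Python standard library:

```python
import numpy as np
from statistics import NormalDist

def tightenings(Gamma, Sigma_half, eps):
    """lambda_r = z_r * || e_r^T Gamma Sigma^{1/2} ||, cf. (12).

    Gamma      : sensitivity matrix dx/domega
    Sigma_half : a square root Sigma^{1/2} of the error covariance
    eps        : per-constraint probability thresholds
    """
    # z_r = F^{-1}(1 - eps_r) for the standard normal distribution
    z = np.array([NormalDist().inv_cdf(1.0 - e) for e in eps])
    rows = Gamma @ Sigma_half          # row r equals e_r^T Gamma Sigma^{1/2}
    return z * np.linalg.norm(rows, axis=1)
```

Consistent with the discussion above, a threshold (ε_x)_r = 0.5 gives z_r = 0 (no tightening), and smaller thresholds give larger λ_r.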
2) Computing ∂x/∂ω: The partial derivatives ∂x/∂ω = Γ in (12) are obtained from the power flow equations:

  ∂/∂ω ( f_ω(x(ω), y; d + ω) ) = (∂f_ω/∂x)(∂x/∂ω) + ∂f_ω/∂ω = 0.

First, note from (5) that ∂f_ω/∂ω = I. Second, the partial derivatives are only needed at ω = 0, which yields the representation ∂x/∂ω = −(∂f/∂x)^{-1}, with the convention f_0 = f. Since x = vec(q_gG, v_L, θ), the so-called Jacobian matrix of partial derivatives is given by

  ∂f/∂x = [ ∂f/∂q_gG   ∂f/∂v_L   ∂f/∂θ ].   (14)

The elements of this matrix can be computed from (1). Note that only the last N equations in (1) depend on q_g. Subsequently, we define the indices 1 ≤ i ≤ N and j = N + i, as well as 1 ≤ g ≤ N_G, 1 ≤ l ≤ N_L, and 1 ≤ t ≤ N. With this, the elements of the Jacobian from (14) are:

  ( ∂f/∂q_gG )_i,g = 0,   (15)
  ( ∂f/∂q_gG )_j,g = −1 if i = [G]_g, and 0 otherwise,
  ( ∂f/∂v_L )_i,l = ∑_{k=1}^N v_k c_ik + 2 c_ii v_i if i = [L]_l, and v_i c_i[L]_l otherwise,
  ( ∂f/∂v_L )_j,l = ∑_{k=1}^N v_k d_ik + 2 d_ii v_i if i = [L]_l, and v_i d_i[L]_l otherwise,
  ( ∂f/∂θ )_i,t = −v_i ∑_{k=1}^N v_k d_ik if i = t, and v_i v_t d_it otherwise,
  ( ∂f/∂θ )_j,t = v_i ∑_{k=1}^N v_k c_ik if i = t, and −v_i v_t c_it otherwise.

The partial derivatives ∂x/∂ω = −(∂f/∂x)^{-1} are defined by an inverse. This inverse is typically well defined for regular optimal power flow problems, as described in [7, Section III.B] and [14]. Therefore, the smallest singular value of ∂f/∂x ≡ J is positive, i.e., σ_min(J) > 0. A positive lower bound for the smallest singular value of a matrix is described in [15, Theorem 1]. Let J_:,i denote the i-th column of J and let J_i,: be the i-th row of J.
Then a lower bound for the smallest singular value is σ_min(J) ≥ K̂_Γ > 0, where

  K̂_Γ = ( (n̂ − 1)/n̂ )^{(n̂ − 1)/2} |J| max( min_i ‖J_:,i‖ / ∏_i ‖J_:,i‖ ,  min_i ‖J_i,:‖ / ∏_i ‖J_i,:‖ ),

n̂ ≡ 2N, and where |J| = |det(J)|. This means that

  ‖ (∂f/∂x)^{-1} ‖ = ‖J^{-1}‖ = 1/σ_min(J) ≤ 1/K̂_Γ ≡ K_Γ,   (16)

where K̂_Γ, K_Γ are finite constants. In order to compute solves with J (which is part of computing the constraints in (12) and (13)), an LU factorization is based on the decomposition J = PLU, where P is a permutation matrix, L is a unit lower triangular matrix, and U is an upper triangular matrix. The determinant, and hence the bounds in (16), are thus available "without extra expense" from the solves with J, by multiplying the diagonal elements of U, since |det(J)| = |det(U)| and U is upper triangular.

D. Implicit Chance Constrained Optimal Power Flow
An optimal power flow problem that combines components of the AC-OPF problem in (2) and of the chance constrained problem in (6) is obtained by using the constraints from (12) and (13). This reformulated problem incorporates stochastic effects while at the same time enabling efficient computations. The corresponding implicit chance-constrained AC-OPF problem is:

  minimize_{v, θ, p_g, q_g}  C(p_gG)   (17)
  subject to  f(v, θ, p_g, q_g; d) = 0
              g(v, θ) ≥ λ_g(p_gG)
              l_q + λ_q(p_gG) ≤ q_gG ≤ u_q − λ_q(p_gG)
              l_v + λ_v(p_gG) ≤ v_L ≤ u_v − λ_v(p_gG)
              l_θ + λ_θ(p_gG) ≤ θ ≤ u_θ − λ_θ(p_gG)
              l_p ≤ p_gG ≤ u_p,

where λ_q(p_gG), λ_v(p_gG), λ_θ(p_gG) are computed using (12) and λ_g(p_gG) is computed using (13). The problem can be seen to be implicit with regard to the probability constraints, because the effects of uncertainty are implicitly included in the tightening of some constraints by the non-negative λ's.

II. METHOD
The solution of the potentially large nonlinear optimization problem in (17) can be computed directly. However, the computation of the λ's adds nonlinearities, because the λ's depend on the matrix Γ = ∂x/∂ω = −(∂f/∂x)^{-1} ∈ R^{2N×2N}, which involves an inverse. Because of this, the problem in (17) is still computationally difficult. Instead of solving (17) directly, we describe an iterative scheme which computes an approximate solution of (17) by solving a sequence of simpler problems (see also [11]). The algorithm is stated as:

Algorithm 1
Inputs: s^(0) = vec(v^(0), θ^(0), p^(0), q^(0)), 0 < τ_q, 0 < τ_v, 0 < τ_θ, 0 < τ_g, 0 < ε_q ≤ 0.5, 0 < ε_v ≤ 0.5, 0 < ε_θ ≤ 0.5, 0 < ε_g ≤ 0.5, 0 ≺ Σ, λ_q^(0) = λ_v^(0) = λ_θ^(0) = λ_g^(0) = 0, k = 0;
for k = 0, 1, ...
  Solve (17) with λ_q^(k), λ_v^(k), λ_θ^(k), λ_g^(k) fixed and obtain s^(k+1);
  Compute λ_q^(k+1), λ_v^(k+1), λ_θ^(k+1), λ_g^(k+1) using (12) and (13);
  if ‖λ_q^(k+1) − λ_q^(k)‖_∞ ≤ τ_q and ‖λ_v^(k+1) − λ_v^(k)‖_∞ ≤ τ_v and ‖λ_θ^(k+1) − λ_θ^(k)‖_∞ ≤ τ_θ and ‖λ_g^(k+1) − λ_g^(k)‖_∞ ≤ τ_g then
    Stop. Return s^(k+1);
  else
    k = k + 1;
  end if
end for

Note that Algorithm 1 stops when the changes in the λ's become small. However, no criteria have yet been specified for when one can expect the iteration to converge. Therefore, we analyze conditions under which one can expect Algorithm 1 to converge.

III. ANALYSIS
The analysis of Algorithm 1 is based on the observation that this iterative algorithm is a fixed point iteration. Therefore, in order to derive its convergence conditions, we investigate the convergence conditions of the fixed point iteration. Throughout this section, the problem from (17) is reformulated as

  minimize_s  C(s)   subject to   f(s) = 0,   h(s; λ) ≥ 0,   (18)

with s = vec(v, θ, p_g, q_g), C(s) = C(p_gG), λ = λ(s) = vec(λ_q, λ_v, λ_θ, λ_g), h(s; λ) ∈ R^m (the m = N_L + 4N inequality constraints in (17), parametrized by λ), and f(s; d) = f(s). Our analysis is consistent with classical nonlinear programming sensitivity results [16], [17], but it differs because of the fixed point component of the algorithm. In classical nonlinear programming, λ is taken as an independent perturbation parameter, with focus on the sensitivities of s with respect to λ. However, because the (implicit) chance-constrained problem also contains the dependence λ = λ(s), we are led to also acknowledge the interplay of λ and s. Recall that a fixed point, say λ*, of a function, say h(λ), is defined by the two conditions

  h(λ*) = λ*   (Condition 1),
  ‖ ∂h(λ*)/∂λ ‖ ∈ [0, 1)   (Condition 2),   (19)

cf. [18]. To investigate the relation between the variables in Algorithm 1, define the mapping in line 3 that determines s^(k+1) from λ^(k) by s_M(λ^(k)) (i.e., s^(k+1) = s_M(λ^(k))), and the operation in line 4 that determines λ^(k+1) from s^(k+1) by λ_M(s^(k+1)) (i.e., λ^(k+1) = λ_M(s^(k+1))). Then the statements in the algorithm recursively define the next iterates, starting from k = 0, s^(k), λ^(k), so that

  s^(k+1) = s_M(λ^(k)) = s_M(λ_M(s^(k))),
  λ^(k+1) = λ_M(s^(k+1)) = λ_M(s_M(λ^(k))).

This means that there is a mapping that generates s^(k+1) from s^(k) and one that generates λ^(k+1) from λ^(k).
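The alternation s^(k+1) = s_M(λ^(k)), λ^(k+1) = λ_M(s^(k+1)) can be sketched as the following skeleton of Algorithm 1, where the two mappings are passed in as callables; the function names and the single combined tolerance are simplifications of ours:

```python
import numpy as np

def fixed_point_cc_opf(solve_opf, update_lambdas, lam0, tol=1e-6, maxiter=50):
    """Skeleton of Algorithm 1: alternate a deterministic OPF solve with an
    update of the constraint tightenings until the lambdas stabilize.

    solve_opf(lam)    -> s   : solves (17) with the tightenings lam held fixed
    update_lambdas(s) -> lam : evaluates (12)-(13) at the new OPF point
    """
    lam = lam0
    for k in range(maxiter):
        s = solve_opf(lam)                         # line 3 of Algorithm 1
        lam_new = update_lambdas(s)                # line 4 of Algorithm 1
        if np.max(np.abs(lam_new - lam)) <= tol:   # infinity-norm stopping test
            return s, lam_new, k + 1
        lam = lam_new
    return s, lam, maxiter
```

When the composed mapping is contractive (i.e., when Condition 2 in (19) holds), the iterates converge; e.g., the scalar toy maps s_M(λ) = λ/2 + 1 and λ_M(s) = s/2 converge to λ* = 2/3.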
In particular, if λ^(k) → λ* then s^(k) → s*, and vice versa. In the remainder of this section let the fixed points be denoted as λ = λ* and s = s*. Our analysis is based on the assertion that if λ (and s) is a fixed point, then Condition 2 in (19) holds. In particular, we analyze the expression

  ( λ_M(s(λ)) )_r = z_r ‖ e_r^⊤ Γ(s(λ)) Σ^{1/2} ‖,

which is (12) with explicit dependencies on s and λ. (The conditions for (13) are derived in a similar way, with an additional constant for the derivatives of the line flow constraints: ‖∂g/∂x‖ ≤ K_g.) The solution to the nonlinear programming problem (18) is characterized by a set of optimality conditions. These conditions are defined via the Lagrangian

  L(s, μ, ρ; λ) = C(s) + μ^⊤ f(s) + ρ^⊤ h(s; λ),

where μ ∈ R^{2N} and ρ ∈ R^m with ρ ≥ 0. Subsequently, the Karush-Kuhn-Tucker (KKT) [19], [20] optimality conditions are the set of nonlinear conditions that define a solution of (18):

  ∂L(s, μ, ρ; λ)/∂s = 0
  f(s) = 0
  h(s, λ) ≥ 0   (20)
  ρ_i h(s, λ)_i = 0,  i = 1, ..., m
  ρ ≥ 0.

The set of active inequality constraints is defined as

  A(s) = { i : h(s, λ)_i = 0 }.

Lagrange multipliers μ, ρ that satisfy the KKT conditions can be found when the columns of the constraint Jacobians are linearly independent, i.e., when

  [ ∂f(s)/∂s ,  ∂h(s, λ)_i/∂s ],  i ∈ A(s),   (21)

are linearly independent. Moreover, second order conditions (which state that the Lagrangian Hessian is positive definite in the nullspace of the constraint derivatives) ensure strict local optimality of a KKT point. Finally, if the active set is unchanged in a small neighborhood around λ, then the changes in s with regard to changes in λ are continuous.
Summarizing, we assume the three conditions:
1) Linear independence of the constraint Jacobians (21);
2) Second order sufficient conditions hold for (18);
3) Strict complementary slackness: ρ_i > 0, i ∈ A(s).
The first two assumptions state that a local minimum for the problem in (18) exists. The third ensures continuity of derivatives such as ∂s/∂λ and ∂λ_M/∂λ. The analysis is divided into three smaller parts, using (λ)_n = λ_n (the n-th element):
Part I: Representation of ∂λ_M/∂λ;
Part II: Sensitivities ∂s/∂λ_n and ∂²f/∂λ_n∂x;
Part III: Bound on ‖∂λ_M/∂λ‖ and implications.

A. Part I: Representation of ∂λ_M/∂λ

This section develops the expressions that will be used later to deduce the condition for convergence. Thus the Γ matrix is represented as a function of s and λ: Γ(s(λ)) = −J(s(λ))^{-1}, where J(s(λ)) = J = ∂f/∂x (cf. (14)). For notation, we use the definition

  w_r(s(λ)) ≡ Σ^{1/2} Γ(s(λ))^⊤ e_r,   (22)

and subsequently rewrite (λ_M(s(λ)))_r = z_r [ w_r(s(λ))^⊤ w_r(s(λ)) ]^{1/2}. Because Condition 2 in (19) involves partial derivatives, we take the partial derivative with respect to λ_n (the n-th tightening), so that

  ∂( λ_M(s(λ)) )_r / ∂λ_n = (z_r² / λ_r) [ w_r^⊤ ∂w_r/∂λ_n ],

denoting (λ_M(s(λ)))_r = λ_r on the right-hand side. This expression determines the matrix of partial derivatives: for indices 1 ≤ r, n ≤ 2N, the matrix of sensitivities with respect to the vector λ has the entries

  ( ∂λ_M/∂λ )_rn = (z_r² / λ_r) [ w_r^⊤ ∂w_r/∂λ_n ].   (23)

Note that since w_r^⊤ is a row vector and ∂w_r/∂λ_n is a column vector, the entries in (23) can be written as

  w_r^⊤ ∂w_r/∂λ_n = ‖w_r‖ ‖∂w_r/∂λ_n‖ cos(φ_rn),   (24)

where φ_rn represents the angle between w_r and ∂w_r/∂λ_n. Since λ_r = z_r ‖w_r‖, and thus ‖w_r‖ = λ_r / z_r, the elements of the matrix in (23) are

  (z_r² / λ_r) [ w_r^⊤ ∂w_r/∂λ_n ] = z_r ‖∂w_r/∂λ_n‖ cos(φ_rn).

From (22) it holds that ‖∂w_r/∂λ_n‖ = ‖ Σ^{1/2} (∂Γ^⊤/∂λ_n) e_r ‖, which yields the inequalities

  ( ∂λ_M/∂λ )_rn ≤ |z_r| ‖Σ^{1/2}‖ ‖ (∂Γ^⊤/∂λ_n) e_r ‖.   (25)

Note that the z_r and Σ are constants, with the upper bound |z_r| ≤ max_{1≤r≤2N} |z_r| ≡ K_z and a fixed ‖Σ^{1/2}‖. Therefore, a bound on ‖ (∂Γ^⊤/∂λ_n) e_r ‖ fully specifies (25), and thus the elements of (23).

B. Part II: Sensitivities ∂s/∂λ_n and ∂²f/∂λ_n∂x

In order to derive a bound on ‖ (∂Γ^⊤/∂λ_n) e_r ‖ = ‖ e_r^⊤ ∂Γ/∂λ_n ‖, recall that Γ(s(λ)) = −J(s(λ))^{-1} = −J^{-1}. In this respect, note that we can make use of the identity

  ∂J^{-1}/∂λ_n = −J^{-1} (∂J/∂λ_n) J^{-1}.

Specifically, defining the vector j_r^⊤ ≡ e_r^⊤ J^{-1}, then

  ‖ e_r^⊤ ∂J^{-1}/∂λ_n ‖ = ‖ j_r^⊤ (∂J/∂λ_n) J^{-1} ‖ ≤ ‖J^{-1}‖ ‖ j_r^⊤ ∂J/∂λ_n ‖.
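The matrix identity used here, ∂J^{-1}/∂λ_n = −J^{-1} (∂J/∂λ_n) J^{-1}, is easy to verify numerically; the small matrices below are arbitrary test data of ours:

```python
import numpy as np

def dinv(J, dJ):
    """Derivative of the matrix inverse: d(J^{-1}) = -J^{-1} (dJ) J^{-1}."""
    Ji = np.linalg.inv(J)
    return -Ji @ dJ @ Ji

# Central finite-difference check of the identity on a 2 x 2 example.
J = np.array([[4.0, 1.0], [0.5, 3.0]])   # nonsingular test matrix
dJ = np.array([[1.0, 0.0], [0.2, 1.0]])  # perturbation direction dJ/dlambda_n
h = 1e-6
fd = (np.linalg.inv(J + h * dJ) - np.linalg.inv(J - h * dJ)) / (2 * h)
```

The finite-difference quotient `fd` agrees with `dinv(J, dJ)` to second order in `h`.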
With a bound ‖J(s(λ))^{-1}‖ ≤ K_Γ (from (16)), it remains to derive an upper bound on ‖ j_r^⊤ ∂J(s(λ))/∂λ_n ‖. Next we describe the essential components for such a bound, based on the elements of the second derivative matrices

  ∂J(s(λ))/∂λ_n = ∂/∂λ_n ( ∂f/∂x ) = [ ∂²f/∂λ_n∂q_gG   ∂²f/∂λ_n∂v_L   ∂²f/∂λ_n∂θ ].

The elements of ∂/∂λ_n ( ∂f/∂x ) can be computed from (15) once

  ∂v_i/∂λ_n ≡ ∂_n v_i,   ∂θ_i/∂λ_n ≡ ∂_n θ_i,   (26)

for 1 ≤ n ≤ 2N, 1 ≤ i ≤ N, are known. This is the case because ∂_n(v_i c_ki) and ∂_n(v_i d_ki), for 1 ≤ k ≤ N (in, e.g., (15)), can be computed from these quantities. In classical nonlinear programming theory, using the assumptions in Section III, the existence of a parametrized solution to (18) with dependence on λ is expressed by

  a(λ) ≡ vec( s(λ), μ(λ), ρ(λ) ).

Selecting a set of rows in a(λ) extracts s(λ). The derivative of a(λ) w.r.t. λ can be computed from the KKT conditions

  G(a(λ), λ) ≡ vec( ∂L(a(λ), λ)/∂s,  f(s(λ)),  ρ_i h(s(λ), λ)_i ) = 0,  i ∈ A(s).   (27)

From (27) it holds that

  ∂G(a(λ), λ)/∂λ_n = (∂G/∂a)(∂a/∂λ_n) + ∂G/∂λ_n = 0,

and hence ∂a/∂λ_n = −[∂G/∂a]^{-1} ∂G/∂λ_n. Thus ∂s/∂λ_n is obtained by extracting elements from ∂a/∂λ_n. The existence of the inverse, and hence of the derivatives of a, is guaranteed by [16, Theorem 3.2]. Because the derivatives are finite, a bound of the form ‖∂a/∂λ_n‖ ≤ K_a exists, which implies ‖∂s/∂λ_n‖ ≤ K_a and ‖∂x/∂λ_n‖ ≤ K_a. Moreover, the partial derivatives ∂/∂λ_n ( ∂f/∂x ) are computed from (15), from which ∂/∂λ_n ( ∂f/∂q_gG ) = 0.
Since the power flow equations hold with equality at the solution, we simplify the summations in (15) using the definitions

  p_i^net ≡ p_gi − p_di,   q_i^net ≡ q_gi − q_di,

so that, e.g., ∑_{k=1}^N v_k c_ik = p_i^net / v_i (and correspondingly for the other terms). Then for 1 ≤ i ≤ N and j = N + i, as well as 1 ≤ l ≤ N_L and 1 ≤ t ≤ N,

  ∂/∂λ_n ( ∂f/∂v_L )_i,l = ∂_n( p_i^net / v_i ) + 2 ∂_n( c_ii v_i ) if i = [L]_l, and ∂_n( v_i c_i[L]_l ) otherwise,
  ∂/∂λ_n ( ∂f/∂v_L )_j,l = ∂_n( q_i^net / v_i ) + 2 ∂_n( d_ii v_i ) if i = [L]_l, and ∂_n( v_i d_i[L]_l ) otherwise,
  ∂/∂λ_n ( ∂f/∂θ )_i,t = −∂_n q_i^net if i = t, and v_t d_it ∂_n v_i + v_i ∂_n( v_t d_it ) otherwise,
  ∂/∂λ_n ( ∂f/∂θ )_j,t = ∂_n p_i^net if i = t, and −( v_t c_it ∂_n v_i + v_i ∂_n( v_t c_it ) ) otherwise.

These derivatives can be bounded with limits on ∂_n p_i^net, ∂_n q_i^net, ∂_n v_i, ∂_n θ_i, ∂_n c_ik, ∂_n d_ik, ∂_n(v_k c_ik), and ∂_n(v_k d_ik) (i.e., all elements that appear in the expressions of the derivatives). Since all of these individual elements are bounded by K_a, and since the c_ik, d_ik terms are bounded too (they depend on v_i, θ_i), the maximum element is bounded by a constant as well. Denote this constant by K_x, so that

  max_{i,j} | ( ∂/∂λ_n ( ∂f/∂x ) )_ij | ≡ K_x.

With this, since ‖j_r‖ ≤ K_Γ and ∂²f/∂λ_n∂x = ∂J(s(λ))/∂λ_n, we obtain the upper bound

  ‖ j_r^⊤ ∂J(s(λ))/∂λ_n ‖ ≤ 2 K_Γ K_x N.   (28)

C. Part III: Bound on ‖∂λ_M/∂λ‖ and Implications

The analysis from the previous section has implications for the convergence of Algorithm 1. In particular, since ‖J(s(λ))^{-1}‖ ≤ K_Γ (cf. Section I-C2), a bound on the magnitude of (23) can be found.
By combining (25) and (28), and noting that typically only N_A ≪ 2N + N_L inequality constraints are active at the solution, the upper bound is

  ‖ ∂λ_M/∂λ ‖ ≤ 2 ‖Σ^{1/2}‖ K_z K_Γ² K_x N_A N.   (29)

If the right-hand side in (29) is less than one, then Condition 2 in (19) of the fixed point iteration is guaranteed to hold. The upper bound in (29) is composed of the parts

  ‖Σ^{1/2}‖,   K_z,   K_Γ² · K_x · N_A · N.

Typically, there is discretion in the values of K_z and ‖Σ^{1/2}‖, whereas K_Γ² · K_x · N_A · N is problem (network specification) dependent. For instance, K_z is reduced by relaxing the probability constraints in (6). Concretely, if all ε → 1/2 then K_z → 0. Secondly, ‖Σ^{1/2}‖ is regarded as a parameter of the iterative algorithm, and lower values of this parameter promote faster convergence. In particular, if ‖Σ^{1/2}‖ is small enough (sufficiently small uncertainty), then from (19) the fixed point iteration is guaranteed to converge. Finally, the bound (29) includes the network dimension N. In numerical experiments, we observe that setting ‖Σ‖ ∼ O(1/N) enables the solution of large networks.

D. Computing a Bound Estimate
Before applying Algorithm 1 for multiple iterations, the bound in (29) can be used qualitatively to assess the convergence behavior of the method. In particular, at k = 0, use s^(1) to compute the right-hand side in (29). K_Γ is available from an LU factorization of J. The computation of K_x can be approximated by first estimating K_a from a lower bound on the smallest singular value of ∂G/∂a using [15] (as for (16)); call such an estimate K̂_a. Because λ appears linearly in the inequalities h(s, λ) = vec( g_1(s), g_2(s) + λ ) (for appropriately defined g_1(s), g_2(s)) and in no other constraints, the expression ∂G/∂λ_n = vec(0, e_n) simplifies. Subsequently, K_a ∼ 1/K̂_a and K_x = a K_a, for a constant a > 0. If the computed bound, say B^(0), exceeds a fixed threshold, set Σ_Scaled = (1/B^(0)) Σ. This ensures that ‖∂λ_r/∂λ‖ ≤ ‖Σ‖ in subsequent iterations. Moreover, we include a scaling factor γ_g in computing the line flow constraint tightenings from (13), thus λ̄_r = γ_g λ_r. Note that for γ_g = 1 no scaling is included.

IV. NUMERICAL EXPERIMENTS
This section describes numerical experiments on a set of standard IEEE test problems. Typically, solving large chance-constrained optimization problems is numerically very challenging. Even directly solving the reformulated problem in (17) with state-of-the-art general-purpose methods quickly becomes intractable. Therefore, for larger networks we apply Algorithm 1 to approximately solve (17). Our implementation of Algorithm 1 uses Julia 1.1.0 [21] and the modeling libraries JuMP v0.18.6 [22] and StructJuMP [23]. In particular, in order to solve the AC-OPF problem from (2) and its modifications in Line 3 of Algorithm 1, we use the general-purpose nonlinear optimization solver IPOPT [24]. The stopping criteria in Algorithm 1 are set as fixed tolerances τ_q, τ_v, τ_θ, and τ_g, each of the form 1 × 10^−k. Unless otherwise stated, we set Σ = σI with σ = 1/N, ε = vec(ε_q, ε_v, ε_θ, ε_g), and γ_g = 1/N_L. The initial values are computed as the midpoint between the upper and lower bounds, s^(0) = (1/2)(u + l). The maximum number of iterations for Algorithm 1 is set as maxiter = 50. The test cases are summarized in Table I.

TABLE I: Test cases. The second column contains the total number of nodes in the network, N, and their split among generators, N_G, and loads, N_L. The largest case contains more than 9,000 nodes.

Problem         | N = N_G + N_L  | N_line | Reference
IEEE 9          |                |        | [6]
IEEE 30         | 30 = 6 + 24    | 41     | [6], [25]
IEEE 118        | 118 = 54 + 64  | 186    | [6]
IEEE 300        | 300 = 69 + 231 | 411    | [6]
IEEE 1354pegase |                |        | [26]
IEEE 2869pegase |                |        | [26]
IEEE 9241pegase |                |        | [26]

The numerical experiments are divided into two parts. Experiment I compares solutions to (17) using a direct solver (i.e., IPOPT) or the iterative approach from Algorithm 1. Experiment II reports convergence results on the test problems from Table I.
Fig. 1. Comparison of optimal objective function values using CC-ACOPF (direct approach) and CC-ACOPF:Alg.1. The optimal objective function values nearly coincide on this IEEE 9 bus case as the parameters ε_v (probability thresholds) vary.

Fig. 2. Comparison of optimal objective function values using CC-ACOPF (direct approach) and CC-ACOPF:Alg.1. Also on this IEEE 30 bus case, the optimal objective function values nearly coincide as the parameters ε_v (probability thresholds) vary.

A. Experiment I
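The monotone behavior observed in these figures has a simple interpretation: under a Gaussian uncertainty model, a chance constraint's tightening margin scales with the quantile Φ^{-1}(1 − ε), so stricter thresholds (larger 1 − ε_v) enforce tighter constraints and hence a higher optimal cost. The following minimal sketch illustrates this with the standard-library normal distribution; the value of σ is illustrative and the function is not the paper's exact tightening formula (13).

```python
from statistics import NormalDist

def gaussian_tightening(eps, sigma):
    """Illustrative tightening margin Phi^{-1}(1 - eps) * sigma.
    Smaller eps (a stricter chance constraint) gives a larger margin;
    as eps -> 1/2 the quantile, and hence the margin, tends to 0."""
    return NormalDist().inv_cdf(1.0 - eps) * sigma

sigma = 0.1
margins = [gaussian_tightening(e, sigma) for e in (0.2, 0.1, 0.05)]
# margins grow as eps shrinks, mirroring the increase of C(s*) with 1 - eps_v
```

The limiting case ε → 1/2 gives a zero margin, which is also why the constant K in the bound (29) vanishes as all probability thresholds are relaxed toward 1/2.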
This experiment compares the optimal objective function values from solving (17) directly (CC-ACOPF) using IPOPT and from using Algorithm 1 (CC-ACOPF:Alg.1). Solving (17) in JuMP becomes computationally intractable beyond N = 100, because the inverse, Γ(s) ∈ R^{N×N}, and its derivatives are continuously recomputed. For this reason, we compare CC-ACOPF and CC-ACOPF:Alg.1 on the 9 bus and 30 bus cases (which are both tractable by the direct approach). In particular, we compare the computed objective function values after perturbing the problem formulation slightly. Specifically, we compare the computed objective function values as the values of ε_v in the probability constraints vary. (Varying other parameters yields similar results.) The outcomes for the 9 bus network are in Figure 1. Figure 2 contains the outcomes of applying both approaches to the 30 bus network. Observe in both figures that the optimal objective function values, i.e., C(s*), increase as the values of 1 − ε_v increase. This is because larger values of 1 − ε_v correspond to stricter chance constraints. Importantly, observe that the objective function values are virtually equivalent for these test cases, as the values of the blue and red curves match nearly exactly. However, the main advantage of Algorithm 1 is that it also scales to larger cases.

TABLE II: Comparison of Algorithm 1 (CC-ACOPF:Alg.1) and a direct solution approach (CC-ACOPF) on a set of IEEE power flow networks. The last rows in the "Objective" column for CC-ACOPF are labeled N/A, because this approach took computational times in excess of 5 hours (entries marked † >5h).

IEEE | CC-ACOPF:Alg.1: Its., Objective, Time(sec) | CC-ACOPF: Objective, Time(sec)

B. Experiment II
Experiment II reports results of applying Algorithm 1 (CC-ACOPF:Alg.1) and a direct solver (CC-ACOPF) to the problem from (17) on the test problems in Table I. For CC-ACOPF:Alg.1, the number of iterations, the time, and the "optimal" objective values are reported. For CC-ACOPF, the "optimal" objective values and the time are reported. Algorithm 1 converged on all of the reported problems, which include large network instances. Observe in Table II that for problems on which both solvers can be used, the optimal objective function values are close to each other. However, for large problems only Algorithm 1, based on a fixed point iteration, is practical.

V. CONCLUSION
This article develops a chance constrained AC optimal power flow model in which the objective function depends only on deterministic variables and therefore enables immediate interpretation of the optimal function values. By linearizing the stochastic variables, a deterministic nonlinear optimization problem is described in lieu of the probabilistic one. Because solving the reformulated optimization problem is computationally challenging, we analyze the convergence criteria for a fixed point algorithm that solves a sequence of modified AC optimal power flow problems and enables scaling to larger network sizes. The analysis connects the model's variance of the uncertainty and the constraint probabilities to the convergence properties of the algorithm. In numerical experiments, we compare the fixed point iteration to directly solving the reformulated problem and test the method on IEEE problems, including a network with over 9000 nodes. Certainly our bounds are quite conservative in this version; however, this is the first attempt at proving convergence of the approach. Since iteratively re-solving a sequence of tractable optimization problems (by holding specific nonlinear terms fixed) has been very effective on a variety of power systems applications, our analysis of such a fixed point method lays the groundwork for the convergence criteria of future algorithms in this category.

ACKNOWLEDGEMENTS
This work was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357 at Argonne National Laboratory and by NSF through award CNS-51545046. We acknowledge fruitful discussions with Dr. Line Roald, who pointed out that Algorithm 1 was observed to cycle between different points and who encouraged an analysis of its convergence criteria. We also thank Eric Grimm, who helped in carrying out parts of the numerical experiments in Section IV.

REFERENCES

[1] H. Zhang and P. Li, "Chance constrained programming for optimal power flow under uncertainty," IEEE Transactions on Power Systems, vol. 26, no. 4, pp. 2417–2424, Nov. 2011.
[2] G. Li and X. P. Zhang, "Stochastic optimal power flow approach considering correlated probabilistic load and wind farm generation," in IET Conference on Reliability of Transmission and Distribution Networks (RTDN 2011), Nov. 2011, pp. 1–7.
[3] M. Hojjat and M. H. Javidi, "Chance-constrained programming approach to stochastic congestion management considering system uncertainties," IET Generation, Transmission & Distribution, vol. 9, no. 12, pp. 1421–1429, 2015.
[4] K. Baker, E. Dall'Anese, and T. Summers, "Distribution-agnostic stochastic optimal power flow for distribution grids," Sep. 2016, pp. 1–6.
[5] M. Vrakopoulou, M. Katsampani, K. Margellos, J. Lygeros, and G. Andersson, "Probabilistic security-constrained ac optimal power flow," June 2013, pp. 1–6.
[6] R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas, "Matpower: Steady-state operations, planning, and analysis tools for power systems research and education," IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12–19, Feb. 2011.
[7] M. Lubin, Y. Dvorkin, and L. Roald, "Chance constraints for improving the security of ac optimal power flow," IEEE Transactions on Power Systems, vol. 34, no. 3, pp. 1908–1917, 2019.
[8] D. Bienstock, M. Chertkov, and S. Harnett, "Chance-constrained optimal power flow: Risk-aware network control under uncertainty," SIAM Review, vol. 56, no. 3, pp. 461–495, 2014.
[9] D. Lee, K. Turitsyn, D. K. Molzahn, and L. A. Roald, "Feasible path identification in optimal power flow with sequential convex restriction," IEEE Transactions on Power Systems, vol. 35, no. 5, pp. 3648–3659, 2020.
[10] V. Frolov, L. Roald, and M. Chertkov, "Cloud-ac-opf: Model reduction technique for multi-scenario optimal power flow via chance-constrained optimization," 2019, pp. 1–6.
[11] L. Roald and G. Andersson, "Chance-constrained ac optimal power flow: Reformulations and efficient algorithms," IEEE Transactions on Power Systems, vol. 33, no. 3, pp. 2906–2918, May 2018.
[12] J. Schmidli, L. Roald, S. Chatzivasileiadis, and G. Andersson, "Stochastic ac optimal power flow with approximate chance-constraints," July 2016, pp. 1–5.
[13] R. G. Miller, Jr., Simultaneous Statistical Inference. New York: Springer-Verlag, 1981.
[14] D. Bienstock, Electrical Transmission System Cascades and Vulnerability. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2015. [Online]. Available: https://epubs.siam.org/doi/abs/10.1137/1.9781611974164
[15] Y. Hong and C.-T. Pan, "A lower bound for the smallest singular value," Linear Algebra and its Applications.
[16] in Systems and Optimization, A. Bagchi and H. T. Jongen, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985, pp. 74–97.
[17] A. V. Fiacco, "Nonlinear programming sensitivity analysis results using strong second order assumptions," in Numerical Optimization of Dynamic Systems, L. C. W. Dixon and G. P. Szegő, Eds. Amsterdam: North-Holland, 1980, pp. 327–348.
[18] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Addison-Wesley, 1994.
[19] W. Karush, "Minima of functions of several variables with inequalities as side conditions," Master's thesis, Department of Mathematics, University of Chicago, Illinois, USA, 1939.
[20] H. W. Kuhn and A. W. Tucker, "Nonlinear programming," in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, Calif.: University of California Press, 1951, pp. 481–492. [Online]. Available: https://projecteuclid.org/euclid.bsmsp/1200500249
[21] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, "Julia: A fresh approach to numerical computing," SIAM Review, vol. 59, no. 1, pp. 65–98, 2017. [Online]. Available: https://doi.org/10.1137/141000671
[22] I. Dunning, J. Huchette, and M. Lubin, "JuMP: A modeling language for mathematical optimization," SIAM Review, vol. 59, no. 2, pp. 295–320, 2017.
[23] M. Anitescu and G. C. Petra, "StructJuMP: a block-structured optimization framework for JuMP," https://github.com/StructJuMP/StructJuMP.jl (retrieved Mar. 27, 2019), 2019.
[24] A. Wächter and L. T. Biegler, "On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming," Math. Program., vol. 106, pp. 25–57, 2006.
[25] O. Alsac and B. Stott, "Optimal load flow with steady-state security," IEEE Transactions on Power Apparatus and Systems, vol. PAS-93, no. 3, pp. 745–751, May 1974.
[26] S. Fliscounakis, P. Panciatici, F. Capitanescu, and L. Wehenkel, "Contingency ranking with respect to overloads in very large power systems taking into account uncertainty, preventive, and corrective actions," IEEE Transactions on Power Systems, vol. 28, no. 4, pp. 4909–4917, Nov. 2013.
Johannes Brust is a postdoctoral researcher in the Mathematics and Computer Science Division at Argonne National Laboratory, IL. He received an M.Sc. in financial engineering from Maastricht University and a Ph.D. in applied mathematics from the University of California, Merced. His research is on effective large-scale computational methods applied to optimal power flow problems.
Mihai Anitescu is a senior computational mathematician in the Mathematics and Computer Science Division at Argonne National Laboratory and a professor in the Department of Statistics at the University of Chicago. He obtained his engineer diploma (electrical engineering) from the Polytechnic University of Bucharest in 1992 and his Ph.D. in applied mathematical and computational sciences from the University of Iowa in 1997. He specializes in numerical optimization, computational science, numerical analysis, and uncertainty quantification, areas in which he has published more than 100 papers in scholarly journals and book chapters. He is on the editorial board of the SIAM Journal on Optimization, is a senior editor for Optimization Methods and Software, and is a past member of the editorial boards of Mathematical Programming A and B, the SIAM Journal on Uncertainty Quantification, and the SIAM Journal on Scientific Computing. He has been recognized for his work in applied mathematics by his selection as a SIAM Fellow in 2019.