Efficient Parameter Selection for Scaled Trust-Region Newton Algorithm in Solving Bound-constrained Nonlinear Systems
Hengameh Mirhajianmoghadam (1), S. Mahmood Ghasemi (2)
1. Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
2. Department of Mathematics, University of Houston, Houston, TX, USA
Abstract
We investigate the problem of parameter selection for the scaled trust-region Newton (STRN) algorithm in solving bound-constrained nonlinear equations. Numerical experiments were performed on a large number of test problems to find the value ranges of the parameters that give the fewest algorithm iterations and function evaluations. Our experiments demonstrate that, in general, there is no single best parameter value: each specific value performs efficiently on some problems and weakly on others. In this work, we report the performance of STRN for various choices of parameters and then suggest the most effective ones.
Key words:
Trust-Region Methods, Nonlinear Systems of Equations, Bound-Constrained Optimization, Parameter Selection
1 Introduction

Optimization plays a key role in contemporary science, including engineering, statistics, computer science, physics, and applied mathematics. If the physics of a phenomenon is properly and comprehensively captured in the associated mathematical model, a huge number of real-world problems translate into solving mathematical problems, especially in the form of optimization [15,12,20,13]. Most practical problems include nonlinear constraints, but constrained optimization relies heavily on the techniques used in unconstrained optimization. Consequently, the more fruitful unconstrained optimization algorithms are, the more efficient the methods that can be proposed for constrained optimization. It is therefore of great importance to investigate the properties of the main algorithms used in unconstrained optimization. Here, we examine the performance of a vital algorithm in the trust-region paradigm: we study efficient parameter selection for the scaled trust-region Newton (STRN) algorithm in solving bound-constrained nonlinear systems. We demonstrate that its performance notably depends on the choice of its hyper-parameters, and we suggest how to choose the most effective values.

We consider the STRN algorithm proposed by Bellavia et al. [2] and its Matlab solver STRSCNE [3]. This numerical algorithm solves bound-constrained nonlinear systems of equations using an affine scaling trust-region method. The problem is

F(x) = 0,  x ∈ Ω,    (1.1)

where F(x) = (F_1(x), ..., F_n(x))^T and Ω = {x ∈ R^n | l ≤ x ≤ u}. The vectors l ∈ (R ∪ {−∞})^n and u ∈ (R ∪ {∞})^n are lower and upper bounds, respectively. The function F is continuously differentiable in an open set X ⊂ R^n containing the n-dimensional box Ω. Systems of this kind appear in chemical process modeling and in steady-state simulation [21,4].

One approach to solving (1.1) is the bound-constrained nonlinear least-squares problem

min_{x ∈ Ω} f(x) := (1/2) ‖F(x)‖².    (1.2)

Nonlinear least-squares problems have been studied extensively in the literature [18,10,11,23]. Bellavia et al. [2] generalized the trust-region strategy for unconstrained systems of nonlinear equations to bound-constrained systems and proposed the STRN, a reliable method for tackling (1.1). This method generates feasible iterates with fast local and global convergence properties. A large set of problems was used to test the efficiency of the STRN. In [2], a comparison with ASTN [10] and IGNT [11], and in [6,7] a comparison between the STRN and NMAdapt [8], has been performed, and the superiority of the method has been demonstrated.
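To make formulation (1.1)-(1.2) concrete, the following minimal sketch evaluates the merit function f(x) = (1/2)‖F(x)‖² and the feasibility test x ∈ Ω for a toy box-constrained system. The system F and the bounds are our own illustration, not taken from the paper; only NumPy is assumed.

```python
import numpy as np

def F(x):
    # A toy 2-dimensional nonlinear system F(x) = 0 (illustrative only).
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

def merit(x):
    # Merit function f(x) = 0.5 * ||F(x)||^2 from (1.2).
    Fx = F(x)
    return 0.5 * float(Fx @ Fx)

# Bounds defining the box Omega = {x : l <= x <= u}.
l = np.array([0.0, 0.0])
u = np.array([2.0, 2.0])

def in_box(x):
    # Feasibility test: x in Omega.
    return bool(np.all(l <= x) and np.all(x <= u))

x_star = np.array([1.0, 1.0])        # solves F(x) = 0 and lies in Omega
print(in_box(x_star), merit(x_star)) # True 0.0
```

Minimizing the merit function over Ω drives ‖F(x)‖ to zero, so a global minimizer of (1.2) with f(x) = 0 is exactly a solution of (1.1).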
The method has been widely applied in engineering fields [5,14,9,17,19].

In this paper, we run the iterative STRN algorithm, using its Matlab implementation STRSCNE (Scaled Trust-Region Solver for Constrained Nonlinear Equations), on various problems and extract the most useful parameter values. The motivation behind this fine-tuning is that the fast-increasing availability of massive data sets has boosted the use of sophisticated optimization algorithms [1,16] with fast convergence.

The remainder of the paper is organized as follows: in Section 2, we explain the STRN algorithm. Section 3 presents the parameter-selection experiments and the numerical results. Section 4 is the conclusion, together with the related figures and tables.

2 The STRN Algorithm
Let x_k ∈ int(Ω) be the current iterate. The next iterate is x_{k+1} = x_k + p_k, where p_k is computed by solving the elliptical trust-region subproblem

min_p m_k(p)  subject to  ‖D_k p‖ ≤ Δ_k.    (2.1)

Here, Δ_k is the trust-region size and D_k = D(x_k) is the diagonal scaling matrix

D(x) = diag(|v_1(x)|^{-1/2}, |v_2(x)|^{-1/2}, ..., |v_n(x)|^{-1/2}),

where v(x) = (v_1(x), ..., v_n(x)) denotes the vector function given by

v_i(x) = x_i − u_i  if (∇f(x))_i < 0 and u_i < ∞;
v_i(x) = x_i − l_i  if (∇f(x))_i ≥ 0 and l_i > −∞;
v_i(x) = −1        if (∇f(x))_i < 0 and u_i = ∞;
v_i(x) = 1         if (∇f(x))_i ≥ 0 and l_i = −∞.

The quadratic model for f in (2.1) is

m_k(p) = (1/2)‖F'_k p + F_k‖² = (1/2)‖F_k‖² + F_k^T F'_k p + (1/2) p^T F'_k^T F'_k p = f_k + ∇f_k^T p + (1/2) p^T F'_k^T F'_k p.

The scaled steepest-descent direction d_k is given by d_k = −D_k^{-2} ∇f_k. The trial step p_k can be calculated by the following procedure (see [2,3] for details).

Procedure to calculate the trial step:
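The construction of the scaling matrix above can be sketched as follows: a minimal NumPy illustration of the four cases defining v(x), with the exponent −1/2 following the reconstruction in the text; the sample point, gradient, and bounds are our own.

```python
import numpy as np

def v(x, grad, l, u):
    # Componentwise v_i(x), following the four cases in the text.
    out = np.empty_like(x)
    for i in range(len(x)):
        if grad[i] < 0:
            out[i] = x[i] - u[i] if np.isfinite(u[i]) else -1.0
        else:
            out[i] = x[i] - l[i] if np.isfinite(l[i]) else 1.0
    return out

def D(x, grad, l, u):
    # Diagonal scaling matrix D(x) = diag(|v_i(x)|^(-1/2)).
    return np.diag(np.abs(v(x, grad, l, u)) ** -0.5)

# Example: an interior point of the box [0, 2] x [0, 2].
x = np.array([0.5, 1.5])
grad = np.array([-1.0, 1.0])      # a sample merit-function gradient
l = np.array([0.0, 0.0])
u = np.array([2.0, 2.0])
print(np.diag(D(x, grad, l, u)))  # |x_i - u_i|^(-1/2) or |x_i - l_i|^(-1/2)
```

The diagonal entries blow up as x_i approaches the bound selected by the sign of the gradient, so the constraint ‖D_k p‖ ≤ Δ_k automatically shortens steps aimed at the active boundary.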
Let ∇f_k, D_k, and Δ_k be given.
1. Calculate the Newton step p_k^N by solving F'_k p_k^N = −F_k.
2. If ‖D_k p_k^N‖ ≤ Δ_k, then set p_k = p_k^N and stop.
3. Compute p̃_k^u = −(‖D_k^{-1} ∇f_k‖² / ‖F'_k D_k^{-2} ∇f_k‖²) D_k^{-1} ∇f_k.
4. If ‖p̃_k^u‖ ≥ Δ_k, then set p̃_k = −Δ_k D_k^{-1} ∇f_k / ‖D_k^{-1} ∇f_k‖; else set p̃_k^N = D_k p_k^N, compute μ by solving ‖p̃_k^u + (1 − μ)(p̃_k^N − p̃_k^u)‖ = Δ_k, and set p̃_k = p̃_k^u + (1 − μ)(p̃_k^N − p̃_k^u).
5. Set p_k = D_k^{-1} p̃_k.

Now, to ensure that the next iterate stays within Ω, we calculate the step size λ(p_k) along p_k to the boundary:

λ(p_k) = ∞ if Ω = R^n;  λ(p_k) = min_{1≤i≤n} Λ_i if Ω ⊂ R^n,

where

Λ_i = max{ (l_i − x_i^k)/p_i^k, (u_i − x_i^k)/p_i^k }  if p_i^k ≠ 0;  Λ_i = ∞ if p_i^k = 0.

If λ(p_k) > 1, then x_k + p_k lies within Ω; if λ(p_k) ≤ 1, a step back along p_k is necessary. The parameter θ controls the amount of truncation:

x_{k+1} = x_k + α(p_k),
α(p_k) = p_k if λ(p_k) > 1;  α(p_k) = max{θ, 1 − ‖p_k‖} λ(p_k) p_k if λ(p_k) ≤ 1.

Here θ ∈ (0,1) is a fixed constant, and it is one of the parameters for which we will find an optimal range.

To guarantee sufficient reduction, we consider the Cauchy point p_k^c, the minimizer of m_k along the scaled steepest-descent direction d_k = −D_k^{-2} ∇f_k; on the other hand, the new iterate should lie within the trust region, so the steepest-descent step has to satisfy the trust-region bound [18]:

p_k^c = τ_k d_k = −τ_k D_k^{-2} ∇f_k,

where

τ_k = argmin_{τ>0} { m_k(τ d_k) : ‖τ D_k d_k‖ ≤ Δ_k } = min{ ‖D_k^{-1} ∇f_k‖² / ‖F'_k D_k^{-2} ∇f_k‖², Δ_k / ‖D_k^{-1} ∇f_k‖ }.

Then we test whether the step α(p_k) satisfies the condition

ρ_k^c(p_k) = [m_k(0) − m_k(α(p_k))] / [m_k(0) − m_k(α(p_k^c))] ≥ β_0,

where β_0 ∈ (0,1) is a constant. If this condition fails, we discard p_k and set p_k = p_k^c. The agreement between the model m_k and the merit function f is tested by the condition

ρ_k^f(p_k) = [f(x_k) − f(x_k + α(p_k))] / [m_k(0) − m_k(α(p_k))] ≥ β_1,

with β_1 ∈ (0,1). If this condition holds, then x_k + α(p_k) is the next iterate.
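Before continuing with the radius update, the step-to-boundary length λ(p_k) and the θ-truncation α(p_k) defined above can be sketched as follows; the box, point, and step are our own illustrative choices, and only NumPy is assumed.

```python
import numpy as np

def lam(x, p, l, u):
    # Step length lambda(p) along p from x to the boundary of Omega.
    Lam = np.full_like(x, np.inf)
    nz = p != 0
    Lam[nz] = np.maximum((l[nz] - x[nz]) / p[nz], (u[nz] - x[nz]) / p[nz])
    return float(np.min(Lam))

def alpha(x, p, l, u, theta):
    # Truncated step alpha(p): take the full step if the boundary is not
    # hit; otherwise pull back by max{theta, 1 - ||p||} * lambda(p).
    lp = lam(x, p, l, u)
    if lp > 1.0:
        return p
    return max(theta, 1.0 - np.linalg.norm(p)) * lp * p

l = np.array([0.0, 0.0])
u = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])
p = np.array([2.0, 0.0])             # a step that would leave the box
step = alpha(x, p, l, u, theta=0.9)
print(x + step)                      # stays strictly inside Omega
```

Because the pull-back factor is strictly less than λ(p_k), the new point never lands exactly on the boundary, which keeps the iterates strictly feasible and the scaling matrix well defined.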
If the ρ_k^f condition fails, we decrease the trust-region size by

Δ_k = min{ α_1 Δ_k, α_2 ‖D_k α(p_k)‖ },  0 < α_1 ≤ α_2 < 1,

and recalculate a new step.

Finally, in order to accelerate convergence, we should take big steps by increasing the trust-region radius wisely: if the agreement between the function and the model is strong enough, we should not miss the opportunity for a better improvement. The following condition captures this:

ρ_k^f(p_k) = [f(x_k) − f(x_k + α(p_k))] / [m_k(0) − m_k(α(p_k))] ≥ β_2,

where β_2 ∈ (0,1) is a constant such that β_1 < β_2 < 1. If this condition holds, then

Δ_{k+1} = max{ Δ_k, γ ‖D_k α(p_k)‖ },  γ > 1;

otherwise the trust-region radius remains the same.

The STRSCNE algorithm is as follows.

Algorithm: The Scaled Trust-Region Solver [3]
- Initialization: Given x_0 ∈ int(Ω), Δ_0 > 0, θ ∈ (0,1), 0 < α_1 ≤ α_2 < 1, β_0 ∈ (0,1), 0 < β_1 < β_2 < 1.
- For k = 0, 1, ... do:
1. Compute F_k.
2. Check for convergence.
3. Compute the matrix D_k from the definition of D(x).
4. Compute the matrix F'_k.
5. Compute p_k^N by solving the linear system F'_k p_k^N = −F_k.
6. Repeat
6.1. Compute an approximate solution p_k of (2.1) by the procedure to calculate the trial step.
6.2. Compute τ_k and the Cauchy point p_k^c.
6.3. Compute α(p_k) and α(p_k^c).
6.4. Compute ρ_k^c(p_k).
6.5. If ρ_k^c(p_k) < β_0, then set p_k = p_k^c.
6.6. Set Δ*_k = Δ_k, decrease Δ_k, and compute ρ_k^f(p_k).
Until ρ_k^f(p_k) ≥ β_1.
7. Set x_{k+1} = x_k + α(p_k), Δ_k = Δ*_k.
8. If ρ_k^f ≥ β_2, then set Δ_{k+1} = max{ Δ_k, γ ‖D_k α(p_k)‖ }; else set Δ_{k+1} = Δ_k.

Success is declared when ‖F_{k+1}‖ falls below the convergence tolerance. Failure happens if:
1) A maximum number of iterations is performed.
2) A maximum number of F-evaluations is performed.
3) The trust-region size is reduced below a minimum threshold.
4) The relative change in the function value becomes small: ‖F_{k+1} − F_k‖ ≤ ε ‖F_k‖ for a given tolerance ε.
5) The norm of the scaled gradient of the merit function, ‖D_k^{-1} ∇f_k‖, falls below a given tolerance.
6) The scaling matrix D_k cannot be computed.

3 Parameter Selection Experiments

In our experiments, we used Matlab 2018b. For the initialization, we set Δ_0 = 1 and γ = 2, together with baseline values of θ, α_1, α_2, β_0, β_1, and β_2; each parameter is then varied in turn over the candidate values described below.

α_1: controls the size reduction of the trust region. Among the 45 studied problems, 32 are insensitive to different values of α_1, while 13 are sensitive for at least one of the initial points. After consecutive runs of the algorithm, we narrowed the settings down to a few candidate values for each parameter; Table 1 shows these values. There is no single best value of α_1: the value that is best for solving Foureq1 from the third starting point differs from the one that works much better for Threeq6 from the fourth starting point or Seveneq2b from the third. Generally speaking, a value in (0.4, 0.5) is the best choice (Fig. 1). A harsh, severe shrinking of the trust region is not always the best approach; we prefer to cut the trust-region size at most in half.

Figure 1. Fine tuning of α_1 for problem Nineq1, fourth starting point (number of iterations and F-evaluations).

α_2: controls the size reduction of the trust region. Among the 45 studied problems, 36 are insensitive to different values of α_2, while 9 are sensitive for at least one of the initial points. Again, there is no single best choice, and the performance depends on the problem. For Threeq5 (Fig. 2), 0.24 and 0.45 are the best choices, while for other problems they are not suitable. By Table 2, the overall performance of two of the candidate values is slightly better. One value of α_2 is the best choice for problem Seveneq2b, converging in 130 iterations, while the other selections need many more iterations to converge; two further values also show pretty good performance.
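The per-parameter sweeps reported in this section can be organized as in the following sketch. Here run_strn is a hypothetical placeholder, not the actual STRSCNE interface; it is stubbed with a dummy iteration count so the loop is runnable, and the candidate values and problem list are illustrative.

```python
# Sweep one parameter over candidate values, run the solver on each
# (problem, starting point) pair, and record the iteration counts.

def run_strn(problem, x0_index, params):
    # Dummy stand-in for a solver call: pretend the iteration count
    # depends only on alpha1 (NOT the real STRSCNE behavior).
    return int(10 / params["alpha1"])

candidates = {"alpha1": [0.1, 0.25, 0.4, 0.5, 0.75, 0.9]}
problems = [("Foureq1", 3), ("Threeq6", 4), ("Seveneq2b", 3)]

results = {}
for name, values in candidates.items():
    for value in values:
        params = {"alpha1": value}
        for problem, x0_index in problems:
            its = run_strn(problem, x0_index, params)
            results[(name, value, problem, x0_index)] = its

# Rank candidates by total iterations across problems and starting points.
best = min(candidates["alpha1"],
           key=lambda v: sum(results[("alpha1", v, p, i)] for p, i in problems))
print(best)
```

With real solver runs in place of the stub, the same bookkeeping yields per-problem tables like Tables 1-6 and an overall ranking of the candidate values.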
β_0: used for accuracy requirements. Among the 45 studied problems, 43 are insensitive to different values of β_0, while 2 are sensitive for at least one of the initial points. Looking at Table 3, as β_0 increases (that is, as the reduction in the model achieved by p_k is required to be closer to the reduction achieved by p_k^c), the algorithm becomes faster, and a relatively large β_0 shows good performance. The problems are mostly insensitive to the different values of β_0.

Figure 2. Fine tuning of α_2 for problem Threeq5, second starting point (number of iterations and F-evaluations).

β_1: used to ensure agreement between the model and the objective. Among the 45 studied problems, 36 are insensitive to different values of β_1, while 9 are sensitive for at least one of the initial points. From Table 4, the smallest candidate value of β_1 is clearly the best choice: as β_1 increases, the agreement test between the model and the function becomes harder to pass and the method becomes slower. We observe this situation in problem Nineq1 (Fig. 3).

Figure 3. Fine tuning of β_1 for problem Nineq1, fourth starting point (number of iterations and F-evaluations).

β_2: controls the updating of the trust-region size. Among the 45 studied problems, 31 are insensitive to different values of β_2, while 14 are sensitive for at least one of the initial points. Two of the candidate values are the best choices, which means we should not wait for a great agreement between the model and the function and should take the opportunity to increase the trust-region size.

θ: used to ensure strictly feasible iterates. Among the 45 studied problems, 28 are insensitive to different values of θ, while 17 are sensitive for at least one of the initial points. Looking at Table 6, one value of θ gives the best results; a second value is also a good choice and works as well as the traditional default
, so a big truncation of p_k performs as well as a small one. One value of θ also works surprisingly well, solving problem Threeq4a in the fewest possible iterations.

γ: controls the size enlargement of the trust region. Among the 45 studied problems, 21 are insensitive to different values of γ, while 24 are sensitive for at least one of the initial points. Looking at Fig. 4, in most cases γ = 8 shows better performance compared to γ = 2. Taking big steps through large trust-region sizes seems risky, but the risk is worth taking, since in most situations it yields faster convergence.

4 Conclusion

The problems Twoeq5a, Twoeq5b, Twoeq7, Threeq5, and 11eq1 are only slightly sensitive to changes in the parameters. The sensitive problems, which can be affected by different values of the parameters, are Twoeq6, Threeq1, Threeq4a, Threeq4b, Threeq6, Foureq1, Sixeq4b, Seveneq2b, and Nineq1; the super-sensitive problems, which can be affected by all 7 parameters, are Seveneq2b and Nineq1.

The results show that taking big steps by increasing the trust-region size is risky, but it brings faster convergence, and the risk is worth taking. Also, we prefer not to cut the trust region too much, to stay in the Cauchy direction as much as possible, and to use easier criteria for checking the agreement between the model and the merit function.

An interesting point is that for some of the problems (Threeq6, Sixeq4b, Seveneq2b, and 11eq1) only specific values of β and θ work well. Focusing on θ, this motivates us to try different values of this parameter before giving up, which can be done by introducing a sequence of initial values of θ rather than a single constant. That means the initialization step of the STRSCNE algorithm should be modified as follows.

STRSCNE with a variable θ
- Initialization: Given x_0 ∈ int(Ω), Δ_0 > 0, a sequence of values θ_i, i = 0, 1, ..., 11, 0 < α_1 ≤ α_2 < 1, β_0 ∈ (0,1), 0 < β_1 < β_2 < 1. Let i = 0 and θ = θ_0.
- Steps 1-8 as before.
9. If ‖F_{k+1}‖ is above the convergence tolerance, repeat the algorithm; if ‖F_{k+1}‖ is below it:
9.1. If Ierr = 0, then STOP and report the solution; else set i = i + 1, θ = θ_i, and go to step 1, unless i = 12.

This scheme either converges to the solution or retries the algorithm for the different values of θ before reporting a failure. It can be a useful approach when the goal is to solve a problem several times within a restricted amount of time. For example, problem Sixeq4b starting from the third starting point converges in 9 iterations for one choice of θ, while for another choice the number of iterations is 350; the first run takes 1.24 seconds and the second 41.72 seconds. The same happens in several other problems.

Table 1
The algorithm's iterations for the selected values of α_1 (six candidate settings)

First x_0
Twoeq7

Second x_0
Threeq1     42  32  32  32  32  32
Threeq4b
foureq1     10  10  10  10  16  15

Third x_0
twoeq5b
Twoeq6      10   9  12   9   9   9
Threeq1     31  35  30  32  32  32
Threeq4a    62  54 118  70  62  62
Threeq4b    10   7   7   7   7   7
Foureq1     13  13  11  12  12  12
Sixeq4b    326 301 299 334 334 334
Seveneq2b  117  35 112  84  84  84
Seveneq3a

Fourth x_0
twoeq5a
Twoeq6
Threeq4a    51  52  53  54  54  54
Threeq6    100  76 225 192 192 192
Nineq1      20  17  18  15  15  15
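The variable-θ initialization proposed in the conclusion amounts to a retry loop around the solver. A minimal sketch follows, with a hypothetical solve stand-in (a real run would call STRSCNE); the success rule and the θ sequence here are purely illustrative.

```python
def solve(problem, theta):
    # Hypothetical stand-in for one full STRN run with truncation
    # parameter theta; returns (ierr, x), where ierr == 0 signals
    # convergence. Here we fake a solver that only succeeds for
    # small enough theta (illustrative only).
    converged = theta <= 0.5
    return (0 if converged else 1), None

def strn_variable_theta(problem, thetas):
    # Retry the solver over a sequence of theta values (the Section 4
    # proposal): stop at the first convergent run, and report failure
    # only after the last candidate has been tried.
    for theta in thetas:
        ierr, x = solve(problem, theta)
        if ierr == 0:
            return theta, x
    return None, None

# Try 12 candidate values theta_0, ..., theta_11 (illustrative sequence).
thetas = [0.95 - 0.05 * i for i in range(12)]
theta_used, x = strn_variable_theta("Sixeq4b", thetas)
print(theta_used)  # first theta in the sequence for which solve converged
```

This mirrors the modified step 9: each failed run advances i and restarts with θ = θ_i, and the method only reports a defeat after all twelve candidates have been exhausted.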
Table 2
The algorithm's iterations for the selected values of α_2 (six candidate settings)

Second x_0
Twoeq6      14  16  16  16  13  17
Threeq1     45  32  32  32  32  32
Threeq5     22  18  16  20  19  21
foureq1     13  13  10  16  13  12

Third x_0
Twoeq6      12  13  14  13  11  10
Threeq4a    52  65  65  67  67  67
Threeq4b     9   7   7   7   7   7
Sixeq4b    321 493 495 496 480 497
Seveneq2b  277 252 130 199 165 157

Fourth x_0
Twoeq6       6  12   6   6   6   6
Threeq4a    51  51  52  59  53  54
Nineq1      22  18  20  20  19  20
Table 3
The algorithm's iterations for the selected values of β_0 (five candidate settings)

First x_0
Nineq1     200 200  21  21  21

Third x_0
Seveneq2b  199 199 199 179 179

Table 4
The algorithm's iterations for the selected values of β_1 (five candidate settings)

First x_0
Twoeq7       9  10   8   8   8
Threeq1      5   5   5   6   6
Threeq6      8   8   8   8  25

Second x_0
Threeq1     38  40  32  45  58
Threeq5     20  20  20  42  93
Threeq6    116 303 305 307   -
Threeq7     18  13  13  13  13
Seveneq2b   17  14  14  14  14

Third x_0
twoeq5b      8   8   7   7   7
Twoeq6      13  13  13  13  10
Threeq1     38  31  24  24  39
Threeq2     39  39  39  44  44
Threeq4a    44  53  67  64  75
foureq1     11  13  13  13  20
Sixeq4b    380 361 350 342 337
Seveneq2b  207 202 199 206 178
11eq1       19  19  19  19  20

Fourth x_0
Threeq4a    33  54  59  60  64
Threeq6    313  96  95  93   -
Nineq1      19  19  20  20  20

Table 6
The algorithm's iterations for the selected values of θ (six candidate settings)

First x_0
Nineq1      14  12  14   8  51 200
Teneq1a     15  14  14  14  14  14

Second x_0
Twoeq6      16  17  15  14  14  16
Threeq1     42  32  40  32  32  32
Threeq5     15  15  15  15  20  20
Threeq6     97 122 134 102 319 305
Threeq7     11  11  10  12  14  13
foureq1     13  11  13  10  13  16
Sixeq4b      -  10   -  10   9   7
Teneq1b     95  80  97  90  79  14
Teneq2b     13  12  12  11  11  11

Third x_0
Twoeq6      18  15  15  16  12  13
Twoeq7       8   8   8   7   7   7
Threeq4a    70  54  57  58  67  67
Threeq4b     7   6   7   6   6   7
foureq1     12  12  13  13  13  13
Sixeq4b      9   9 170 260 253 350
Seveneq2b   40 153   -  21  41 199
Teneq2a     11  11  10   9   9   9
11eq1        -   -   -   -  22  19

Fourth x_0
Threeq4a    47  62  48  51  52  59
Sixeq4b      9   8   8   7   7   7
Nineq1      24  23  22  22  19  20
Figure 4. Performance of different values of γ (γ = 2, 4, 6, 8, 10), in iterations, on the sensitive problems, for the first through fourth starting points.

References

[1] Azencott, R., Muravina, V., Hekmati, R., Zhang, W., & Paldino, M. (2019). Automatic clustering in large sets of time series. In Contributions to Partial Differential Equations and Applications (pp. 65-75). Springer, Cham.
[2] Bellavia, S., Macconi, M., & Morini, B. (2003). An affine scaling trust-region approach to bound-constrained nonlinear systems. Applied Numerical Mathematics, 44(3), 257-280.
[3] Bellavia, S., Macconi, M., & Morini, B. (2004). STRSCNE: A scaled trust-region solver for constrained nonlinear equations. Computational Optimization and Applications, 28(1), 31-50.
[4] Bullard, L. G., & Biegler, L. T. (1991). Iterative linear programming strategies for constrained simulation. Computers & Chemical Engineering, 15(4), 239-254.
[5] Ferkl, L., & Meinsma, G. (2007). Finding optimal ventilation control for highway tunnels. Tunnelling and Underground Space Technology, 22(2), 222-229.
[6] Hekmati, R. (2016). On efficiency of non-monotone adaptive trust region and scaled trust region methods in solving nonlinear systems of equations. Biquarterly Research Journal of Control and Optimization in Applied Mathematics, 1(1), 31-40.
[7] Hekmati, R., & Mirhajianmoghadam, H. (2018). Nested performance profiles for benchmarking software. arXiv preprint arXiv:1809.06270.
[8] Hongwei, L. (2008). A non-monotone adaptive trust region algorithm for nonlinear equations for recurrent event data. Acta Mathematicae Applicatae Sinica, 6.
[9] Hosseini, M., Islam, R., Kulkarni, A., & Mohsenin, T. (2017, April). A scalable FPGA-based accelerator for high-throughput MCMC algorithms. In 2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (pp. 201-201). IEEE.
[10] Kanzow, C. (2001).
An active set-type Newton method for constrained nonlinear systems. In Complementarity: Applications, Algorithms and Extensions (pp. 179-200). Springer, Boston, MA.
[11] Kozakevich, D. N., Martinez, J. M., & Santos, S. A. (1997). Solving nonlinear systems of equations with simple constraints. Computational & Applied Mathematics.
[12] Layegh, M., Ghodsi, F. E., & Hadipour, H. (2018). Improving the electrochemical response of nanostructured MoO3 electrodes by Co doping: Synthesis and characterization. Journal of Physics and Chemistry of Solids, 121, 375-385.
[13] Layegh, M., Ghodsi, F. E., & Hadipour, H. (2020). Experimental and theoretical study of Fe doping as a modifying factor in electrochemical behavior of mixed-phase molybdenum oxide thin films. Applied Physics A, 126(1), 14.
[14] Manzacca, G., Cincotti, G., & Hingerl, K. (2007). Ultrafast switching by controlling Rabi splitting. Applied Physics Letters, 91(23), 231920.
[15] Mousavi, S., Taghiabadi, M. M. R., & Ayanzadeh, R. (2019). A survey on compressive sensing: Classical results and recent advancements. arXiv preprint arXiv:1908.01014.
[16] Najarian, M., & Lim, G. J. (2020). Optimizing infrastructure resilience under budgetary constraint. Reliability Engineering & System Safety, 198, 106801.
[17] Najarian, M., Sarmast, Z., Ghasemi, S. M., & Sarmadi, S. (2018). Evolutionary vertical size reduction: A novel approach for big data computing. rn, 55, 7.
[18] Nocedal, J., & Wright, S. (2006). Numerical Optimization. Springer Science & Business Media.
[19] Pirhooshyaran, M., & Snyder, L. V. (2017, August). Optimization of inventory and distribution for hip and knee joint replacements via multistage stochastic programming. In Modeling and Optimization: Theory and Applications (pp. 139-155). Springer, Cham.
[20] Schweidtmann, A. M., Clayton, A. D., Holmes, N., Bradford, E., Bourne, R. A., & Lapkin, A. A. (2018). Machine learning meets continuous flow chemistry: Automated optimization towards the Pareto front of multiple objectives. Chemical Engineering Journal, 352, 277-282.
[21] Shacham, M. (1986). Numerical solution of constrained non-linear algebraic equations. International Journal for Numerical Methods in Engineering, 23(8), 1455-1481.
[22] Shacham, M., Brauner, N., & Cutlip, M. B. (2002). A web-based library for testing performance of numerical software for solving nonlinear algebraic equations. Computers & Chemical Engineering, 26(4-5), 547-554.
[23] Ulbrich, M. (2001). Nonmonotone trust-region methods for bound-constrained semismooth equations with applications to nonlinear mixed complementarity problems. SIAM Journal on Optimization, 11(4), 889-917.