Strong convergence of an inertial Tseng's extragradient algorithm for pseudomonotone variational inequalities with applications to optimal control problems
Bing Tan a, Xiaolong Qin b,∗
a Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
b Department of Mathematics, Zhejiang Normal University, Zhejiang, China
Abstract
We investigate an inertial viscosity-type Tseng's extragradient algorithm with a new step size to solve pseudomonotone variational inequality problems in real Hilbert spaces. A strong convergence theorem for the algorithm is obtained without prior knowledge of the Lipschitz constant of the operator and without any additional projections. Finally, several computational tests are carried out to demonstrate the reliability and benefits of the algorithm and to compare it with existing ones. Moreover, our algorithm is also applied to solve the variational inequality problem that appears in optimal control problems. The algorithm presented in this paper improves some known results in the literature.
Keywords:
Pseudomonotone variational inequality, Tseng's extragradient method, inertial method, viscosity method, optimal control problem
1. Introduction
The goal of this study is to investigate a fast iterative method for finding a solution of the variational inequality problem (in short, VIP). Throughout this paper, $H$ is a real Hilbert space with inner product $\langle \cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and $C$ is a nonempty, closed and convex subset of $H$. The problem under consideration reads as follows:

find $y^* \in C$ such that $\langle A y^*, z - y^*\rangle \ge 0, \ \forall z \in C$,  (VIP)

where $A : H \to H$ is a nonlinear mapping. We denote the solution set of (VIP) by $\mathrm{VI}(C, A)$.

Variational inequalities are powerful tools and models in applied mathematics and play an essential role in society, optimization, economics, transportation, mathematical programming, engineering mechanics, and other fields (see, for instance, [1, 2, 3]). Over the last decades, various effective solution methods have been investigated and developed for problems of type (VIP); see, e.g., [4, 5, 6] and the references therein. It should be pointed out that these approaches usually require the mapping $A$ to satisfy some monotonicity property. In this paper, we assume that the mapping $A$ associated with (VIP) is pseudomonotone (see the definition below), a class strictly broader than that of monotone mappings.

Let us review some nonlinear mappings in nonlinear analysis for further use. For all elements $p, q \in H$, a mapping $A : H \to H$ is said to be:

∗ Corresponding author
Email addresses: [email protected] (Bing Tan), [email protected] (Xiaolong Qin)
Preprint submitted to Elsevier, July 24, 2020.

(1) $\eta$-strongly monotone if there is a positive number $\eta$ such that $\langle A p - A q, p - q\rangle \ge \eta\|p - q\|^2$;

(2) $\eta$-inverse strongly monotone if there is a positive number $\eta$ such that $\langle A p - A q, p - q\rangle \ge \eta\|A p - A q\|^2$;

(3) monotone if $\langle A p - A q, p - q\rangle \ge 0$;

(4) $\eta$-strongly pseudomonotone if there is a positive number $\eta$ such that $\langle A p, q - p\rangle \ge 0 \Rightarrow \langle A q, q - p\rangle \ge \eta\|p - q\|^2$;

(5) pseudomonotone if $\langle A p, q - p\rangle \ge 0 \Rightarrow \langle A q, q - p\rangle \ge 0$;

(6) $L$-Lipschitz continuous if there is $L > 0$ such that $\|A p - A q\| \le L\|p - q\|$;

(7) sequentially weakly continuous if, for any sequence $\{p_n\}$ converging weakly to a point $p \in H$, $\{A p_n\}$ converges weakly to $A p$.

It can be easily checked that the following implications hold: $(1) \Rightarrow (3) \Rightarrow (5)$ and $(1) \Rightarrow (4) \Rightarrow (5)$. Note that the converse implications are generally false. Recall that the mapping $P_C : H \to C$ is called the metric projection from $H$ onto $C$ if, for each $x \in H$, there is a unique nearest point in $C$, denoted by $P_C(x)$; that is, $P_C x := \arg\min\{\|x - y\| : y \in C\}$.

The oldest and simplest projection method for solving variational inequality problems is the projected-gradient method, which reads as follows:

$x_{n+1} = P_C(x_n - \gamma A x_n), \ \forall n \ge 1$,  (PGM)

where $P_C$ denotes the metric projection onto $C$, the mapping $A$ is $L$-Lipschitz continuous and $\eta$-strongly monotone, and the step size $\gamma \in (0, 2\eta/L^2)$. Then the iterative sequence $\{x_n\}$ defined by (PGM) converges to the solution of (VIP) provided that $\mathrm{VI}(C, A)$ is nonempty. It should be noted that the iterative sequence $\{x_n\}$ generated by (PGM) does not necessarily converge when the mapping $A$ is "only" monotone.
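As a concrete illustration of (PGM), consider a toy instance of our own (not from the paper): $A(x) = x - p$ is $\eta$-strongly monotone and $L$-Lipschitz with $\eta = L = 1$, and $C$ is a box, so the unique solution of the VIP is $P_C(p)$ and any $\gamma \in (0, 2)$ works.

```python
import numpy as np

# Illustrative sketch (our own toy example): projected-gradient method
# x_{n+1} = P_C(x_n - gamma * A(x_n)) with A(x) = x - p (eta = L = 1)
# and C = box [-1, 1]^2. The solution of the VIP is P_C(p).
def project_box(x, a, b):
    return np.minimum(b, np.maximum(x, a))

def pgm(A, proj, x0, gamma=0.5, iters=200):
    x = x0
    for _ in range(iters):
        x = proj(x - gamma * A(x))
    return x

p = np.array([2.0, 0.3])
A = lambda x: x - p
a, b = -np.ones(2), np.ones(2)
x_star = pgm(A, lambda x: project_box(x, a, b), np.zeros(2))
print(x_star)  # ≈ [1.0, 0.3] = P_C(p)
```

With strong monotonicity the fixed-point map is a contraction, which is exactly what fails in the "only monotone" case discussed next.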
Recently, Malitsky [7] introduced the projected reflected gradient method, which can be viewed as an improvement of (PGM). The sequence generated by this method is as follows:

$x_{n+1} = P_C(x_n - \gamma A(2x_n - x_{n-1})), \ \forall n \ge 1$.  (PRGM)

He proved that the sequence $\{x_n\}$ created by the iterative scheme (PRGM) converges to $u \in \mathrm{VI}(C, A)$ when the mapping $A$ is monotone. Further extensions of (PRGM) can be found in [8, 9].

In much of the research on solving variational inequalities governed by pseudomonotone and Lipschitz continuous operators, the most commonly used algorithm is the extragradient method (see [10]) and its variants. Korpelevich proposed the extragradient method (EGM) in [10] to find the solution of the saddle point problem in finite-dimensional spaces. The details of EGM are as follows:

$y_n = P_C(x_n - \gamma A x_n)$,
$x_{n+1} = P_C(x_n - \gamma A y_n), \ \forall n \ge 1$,  (EGM)

where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\gamma \in (0, 1/L)$. Under the condition $\mathrm{VI}(C, A) \neq \emptyset$, the iterative sequence $\{x_n\}$ defined by (EGM) converges to an element of $\mathrm{VI}(C, A)$. In the past few decades, EGM has been considered and extended by many authors for solving (VIP) in infinite-dimensional spaces; see, e.g., [11, 12, 13] and the references therein. Recently, Vuong [14] extended EGM to solve pseudomonotone variational inequalities in Hilbert spaces and proved that the iterative sequence constructed by the algorithm converges weakly to a solution of (VIP). On the other hand, it is not easy to compute the projection onto a general closed convex set $C$, especially when $C$ has a complex structure. Note that the extragradient method requires two projections onto the closed convex set $C$ in each iteration, which may severely affect the computational performance of the algorithm.

Next, we introduce two types of methods to enhance the numerical efficiency of EGM.
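A quick numerical instance of (EGM), on a toy problem of our own choosing: the rotation operator $A(x) = (x_2, -x_1)$ is monotone but not strongly monotone and $1$-Lipschitz, $C$ is the unit ball, and the unique solution of the VIP is $x^* = 0$.

```python
import numpy as np

# Illustrative sketch (our own toy example): Korpelevich's extragradient
# method for the monotone rotation A(x) = (x2, -x1) over the unit ball.
# Two projections per iteration; gamma in (0, 1/L) with L = 1.
def project_ball(x, center, radius):
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + radius * (x - center) / d

A = lambda x: np.array([x[1], -x[0]])
proj = lambda x: project_ball(x, np.zeros(2), 1.0)

x = np.array([0.5, 0.2])
gamma = 0.5
for _ in range(200):
    y = proj(x - gamma * A(x))      # first projection
    x = proj(x - gamma * A(y))      # second projection
print(np.linalg.norm(x))  # ≈ 0, the solution of the VIP
```

On this example a plain (PGM) step would only circle around the solution, while the extragradient correction makes the iterates spiral inward.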
The first approach is Tseng's extragradient method (referred to as TEGM, also known as the forward-backward-forward method) proposed by Tseng [15]. The advantage of this method is that only one projection onto the feasible set needs to be computed in each iteration. More precisely, TEGM is expressed as follows:

$y_n = P_C(x_n - \gamma A x_n)$,
$x_{n+1} = y_n - \gamma(A y_n - A x_n), \ \forall n \ge 1$,  (TEGM)

where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\gamma \in (0, 1/L)$. Then the iterative sequence $\{x_n\}$ formulated by (TEGM) converges to a solution of (VIP) provided that $\mathrm{VI}(C, A)$ is nonempty. Very recently, Bot, Csetnek and Vuong [16] proposed a Tseng-type forward-backward-forward algorithm for solving pseudomonotone variational inequalities in Hilbert spaces and performed an asymptotic analysis of the generated trajectories. The second method is the subgradient extragradient method (SEGM) proposed by Censor, Gibali and Reich [17], which can be regarded as a modification of EGM: the second projection in (EGM) is replaced by a projection onto a half-space. SEGM reads as follows:

$y_n = P_C(x_n - \gamma A x_n)$,
$T_n = \{x \in H \mid \langle x_n - \gamma A x_n - y_n, x - y_n\rangle \le 0\}$,
$x_{n+1} = P_{T_n}(x_n - \gamma A y_n), \ \forall n \ge 1$,  (SEGM)

where the mapping $A$ is $L$-Lipschitz continuous and monotone and the fixed step size $\gamma \in (0, 1/L)$. SEGM converges not only for monotone variational inequalities (see [18]) but also for pseudomonotone ones (see [19, 20]).

It is worth mentioning that (EGM), (TEGM) and (SEGM) are only weakly convergent in infinite-dimensional Hilbert spaces. Some practical problems arising in image processing, quantum mechanics, medical imaging and machine learning need to be modeled and analyzed in infinite-dimensional spaces. Therefore, strong convergence results are preferable to weak convergence results in such settings.
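The single-projection structure of (TEGM) is easy to see in code. The sketch below reuses the toy rotation operator from before (our own example, not from the paper):

```python
import numpy as np

# Illustrative sketch (our own toy example): Tseng's forward-backward-forward
# method (TEGM) for A(x) = (x2, -x1) over the unit ball. Note that only ONE
# projection per iteration is required, unlike (EGM) and (SEGM).
def project_ball(x, center, radius):
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + radius * (x - center) / d

A = lambda x: np.array([x[1], -x[0]])
proj = lambda x: project_ball(x, np.zeros(2), 1.0)

x = np.array([0.5, 0.2])
gamma = 0.5                             # gamma in (0, 1/L), L = 1
for _ in range(200):
    y = proj(x - gamma * A(x))          # the only projection
    x = y - gamma * (A(y) - A(x))       # forward correction, projection-free
print(np.linalg.norm(x))  # ≈ 0
```

When $C$ has a complicated structure, replacing one of the two projections with this cheap forward correction is precisely what makes TEGM attractive.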
Recently, Thong and Vuong [21] introduced a modified Mann-type Tseng's extragradient method to solve the (VIP) involving a pseudomonotone mapping in Hilbert spaces. Their method uses an Armijo-like line search to eliminate the reliance on the Lipschitz constant of the mapping $A$. The proposed algorithm is stated as follows:

$y_n = P_C(x_n - \gamma_n A x_n)$,
$z_n = y_n - \gamma_n(A y_n - A x_n)$,
$x_{n+1} = (1 - \varphi_n - \tau_n)x_n + \tau_n z_n, \ \forall n \ge 1$,  (MaTEGM)

where the mapping $A$ is pseudomonotone, sequentially weakly continuous on $C$ and uniformly continuous on bounded subsets of $H$; $\{\varphi_n\}$ and $\{\tau_n\}$ are two positive real sequences in $(0, 1)$ such that $\{\tau_n\} \subset (a, 1 - \varphi_n)$ for some $a > 0$, $\lim_{n\to\infty}\varphi_n = 0$ and $\sum_{n=1}^{\infty}\varphi_n = \infty$; $\gamma_n := \alpha\ell^{q_n}$, where $q_n$ is the smallest non-negative integer $q$ satisfying

$\alpha\ell^{q}\|A x_n - A y_n\| \le \phi\|x_n - y_n\|$

with $\alpha > 0$, $\ell \in (0, 1)$ and $\phi \in (0, 1)$. They showed that the iteration scheme (MaTEGM) converges strongly to an element $u$ provided $\mathrm{VI}(C, A) \neq \emptyset$, where $u = \arg\min\{\|z\| : z \in \mathrm{VI}(C, A)\}$.

To accelerate the convergence of such algorithms, in 1964, Polyak [22] considered the second-order dynamical system $\ddot{x}(t) + \gamma\dot{x}(t) + \nabla f(x(t)) = 0$, where $\gamma > 0$, $\nabla f$ denotes the gradient of $f$, and $\dot{x}(t)$ and $\ddot{x}(t)$ denote the first and second derivatives of $x$ at $t$, respectively. This dynamical system is called the Heavy Ball with Friction (HBF).

Next, we consider the discretization of the dynamical system (HBF), that is,

$\dfrac{x_{n+1} - 2x_n + x_{n-1}}{h^2} + \gamma\,\dfrac{x_n - x_{n-1}}{h} + \nabla f(x_n) = 0, \ \forall n \ge 1$.

Through a direct calculation, we get the following form:

$x_{n+1} = x_n + \tau(x_n - x_{n-1}) - \varphi\nabla f(x_n), \ \forall n \ge 1$,

where $\tau = 1 - \gamma h$ and $\varphi = h^2$. This can be considered as the following two-step iteration scheme:

$y_n = x_n + \tau(x_n - x_{n-1})$,
$x_{n+1} = y_n - \varphi\nabla f(x_n), \ \forall n \ge 1$.
This iteration is now called the inertial extrapolation algorithm, and the term $\tau(x_n - x_{n-1})$ is referred to as the extrapolation point. In recent years, inertial techniques have attracted extensive research in the optimization community as acceleration methods. Many scholars have built various fast numerical algorithms by employing inertial techniques. These algorithms have shown advantages both in theory and in computational experiments and have been successfully applied to many problems; see, for instance, [23, 24, 25] and the references therein.

Very recently, inspired by the inertial method, the SEGM and the viscosity method, Thong, Hieu and Rassias [26] presented a viscosity-type inertial subgradient extragradient algorithm to solve pseudomonotone (VIP) in Hilbert spaces. The algorithm is of the form:

$s_n = x_n + \delta_n(x_n - x_{n-1})$,
$y_n = P_C(s_n - \gamma_n A s_n)$,
$T_n = \{x \in H \mid \langle s_n - \gamma_n A s_n - y_n, x - y_n\rangle \le 0\}$,
$z_n = P_{T_n}(s_n - \gamma_n A y_n)$,
$x_{n+1} = \varphi_n f(z_n) + (1 - \varphi_n)z_n, \ \forall n \ge 1$,

with the step size updated by

$\gamma_{n+1} = \min\{\phi\|s_n - y_n\| / \|A s_n - A y_n\|,\ \gamma_n\}$ if $A s_n - A y_n \neq 0$; $\gamma_{n+1} = \gamma_n$ otherwise,  (ViSEGM)

where the mapping $A$ is pseudomonotone, $L$-Lipschitz continuous and sequentially weakly continuous on $C$, and the inertial parameters $\delta_n$ are updated as follows:

$\delta_n = \min\{\epsilon_n / \|x_n - x_{n-1}\|,\ \delta\}$ if $x_n \neq x_{n-1}$; $\delta_n = \delta$ otherwise.

Note that Algorithm (ViSEGM) uses a simple step size rule, which is generated from previously known information in each iteration. Therefore, it works well without prior knowledge of the Lipschitz constant of the mapping $A$.
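To see the effect of the inertial (heavy-ball) term numerically, here is a small comparison of our own (the parameter values $\tau$, $\varphi$ and the quadratic are ad hoc choices, not from the paper) between the two-step inertial scheme above and plain gradient descent ($\tau = 0$):

```python
import numpy as np

# Illustrative sketch (our own example): heavy-ball iteration
# x_{n+1} = x_n + tau*(x_n - x_{n-1}) - phi*grad_f(x_n)
# on the ill-conditioned quadratic f(x) = 0.5 * x' Q x, minimizer x* = 0.
Q = np.diag([1.0, 50.0])
grad = lambda x: Q @ x

def heavy_ball(tau, phi, iters=300):
    x_prev = x = np.array([1.0, 1.0])
    for _ in range(iters):
        x_next = x + tau * (x - x_prev) - phi * grad(x)
        x_prev, x = x, x_next
    return np.linalg.norm(x)          # distance to the minimizer

print(heavy_ball(0.0, 0.02))   # plain gradient descent
print(heavy_ball(0.5, 0.02))   # with inertia: markedly closer to 0
```

The inertial run damps the slow eigendirection much faster, which is the acceleration effect the algorithms below exploit.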
They confirmed the strong convergence of (ViSEGM) under mild assumptions on the cost mapping and the parameters.

Motivated and stimulated by the above works, we introduce a new inertial Tseng's extragradient algorithm with a new step size for solving the pseudomonotone (VIP) in Hilbert spaces. The advantages of our algorithm are: (1) only one projection onto the feasible set needs to be computed in each iteration; (2) it does not require prior knowledge of the Lipschitz constant of the cost mapping; (3) the inertial term gives it a faster convergence speed. Under mild assumptions, we establish a strong convergence theorem for the suggested algorithm. Lastly, some computational tests in finite and infinite dimensions are presented to verify our theoretical results. Furthermore, our algorithm is also applied to solve optimal control problems. Our algorithm improves some existing results [21, 26, 27, 28, 29].

The organizational structure of our paper is as follows. Some essential definitions and technical lemmas are given in the next section. In Section 3, we propose an algorithm and analyze its convergence. Some computational tests and applications to verify our theoretical results are presented in Section 4. Finally, the paper ends with a brief summary.
2. Preliminaries
Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The weak and strong convergence of $\{x_n\}_{n=1}^{\infty}$ to $x$ are denoted by $x_n \rightharpoonup x$ and $x_n \to x$, respectively. For each $x, y \in H$ and $\delta \in \mathbb{R}$, we have the following facts:

(1) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$;
(2) $\|\delta x + (1 - \delta)y\|^2 = \delta\|x\|^2 + (1 - \delta)\|y\|^2 - \delta(1 - \delta)\|x - y\|^2$.

It is known that $P_C x$ has the following basic properties:

• $\langle x - P_C x, y - P_C x\rangle \le 0, \ \forall y \in C$;
• $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y\rangle, \ \forall y \in H$;
• $\|x - P_C(x)\|^2 \le \|x - y\|^2 - \|y - P_C(x)\|^2, \ \forall y \in C$.

We give some explicit formulas for projections onto special feasible sets.

(1) The projection of $x$ onto a half-space $H_{u,v} = \{x : \langle u, x\rangle \le v\}$ is given by

$P_{H_{u,v}}(x) = x - \max\{[\langle u, x\rangle - v]/\|u\|^2,\ 0\}\,u$.

(2) The projection of $x$ onto a box $\mathrm{Box}[a, b] = \{x : a \le x \le b\}$ is given by

$P_{\mathrm{Box}[a,b]}(x)_i = \min\{b_i, \max\{x_i, a_i\}\}$.
(3) The projection of $x$ onto a ball $B[p, q] = \{x : \|x - p\| \le q\}$ is given by

$P_{B[p,q]}(x) = p + \dfrac{q}{\max\{\|x - p\|,\ q\}}(x - p)$.

The following lemmas play an important role in our proof.
Lemma 2.1 ([30]). Assume that $C$ is a closed and convex subset of a real Hilbert space $H$. Let the operator $A : C \to H$ be continuous and pseudomonotone. Then $x^*$ is a solution of (VIP) if and only if $\langle A x, x - x^*\rangle \ge 0, \ \forall x \in C$.

Lemma 2.2 ([31]). Let $\{p_n\}$ be a sequence of nonnegative real numbers, $\{q_n\}$ a sequence of real numbers, and $\{\sigma_n\}$ a sequence in $(0, 1)$ such that $\sum_{n=1}^{\infty}\sigma_n = \infty$. Assume that

$p_{n+1} \le (1 - \sigma_n)p_n + \sigma_n q_n, \ \forall n \ge 1$.

If $\limsup_{k\to\infty} q_{n_k} \le 0$ for every subsequence $\{p_{n_k}\}$ of $\{p_n\}$ satisfying $\liminf_{k\to\infty}(p_{n_k+1} - p_{n_k}) \ge 0$, then $\lim_{n\to\infty} p_n = 0$.
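The three closed-form projections above transcribe directly into code (a straightforward sketch; the function names and test points are our own):

```python
import numpy as np

# Direct transcriptions of the closed-form projections of Section 2.
def proj_halfspace(x, u, v):
    # projection onto {x : <u, x> <= v}
    return x - max((np.dot(u, x) - v) / np.dot(u, u), 0.0) * u

def proj_box(x, a, b):
    # componentwise projection onto {x : a <= x <= b}
    return np.minimum(b, np.maximum(x, a))

def proj_ball(x, p, q):
    # projection onto {x : ||x - p|| <= q}
    return p + q / max(np.linalg.norm(x - p), q) * (x - p)

print(proj_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0))  # [1. 0.]
print(proj_box(np.array([2.0, -3.0]), -np.ones(2), np.ones(2)))         # [ 1. -1.]
print(proj_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0))                # [0.6 0.8]
```

Note that each formula is $O(n)$ in the dimension, which is why feasible sets of these special forms are preferred in the experiments of Section 4.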
3. Main results
In this section, we present a self-adaptive inertial viscosity-type Tseng's extragradient algorithm, which combines the inertial method, the viscosity method and Tseng's extragradient method. The major benefit of this algorithm is that the step size is automatically updated at each iteration without any line search procedure. Moreover, our iterative scheme only needs to compute one projection in each iteration. Before stating our main result, we assume that the following five conditions hold.

(C1) The feasible set $C$ is nonempty, closed and convex.
(C2) The solution set of the (VIP) is nonempty, that is, $\mathrm{VI}(C, A) \neq \emptyset$.
(C3) The mapping $A : H \to H$ is pseudomonotone and $L$-Lipschitz continuous on $H$, and sequentially weakly continuous on $C$.
(C4) The mapping $f : H \to H$ is $\rho$-contractive with $\rho \in [0, 1)$.
(C5) The positive sequence $\{\epsilon_n\}$ satisfies $\lim_{n\to\infty}\epsilon_n/\varphi_n = 0$, where $\{\varphi_n\} \subset (0, 1)$ is such that $\lim_{n\to\infty}\varphi_n = 0$ and $\sum_{n=1}^{\infty}\varphi_n = \infty$.

Now we can state the details of the iterative method. Our algorithm is described as follows. Remark 3.1.
It follows from (3.1) and Assumption (C5) that $\lim_{n\to\infty}(\delta_n/\varphi_n)\|x_n - x_{n-1}\| = 0$. Indeed, by (3.1) we have $\delta_n\|x_n - x_{n-1}\| \le \epsilon_n$ for all $n$, which together with $\lim_{n\to\infty}\epsilon_n/\varphi_n = 0$ yields

$\lim_{n\to\infty}\dfrac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\| \le \lim_{n\to\infty}\dfrac{\epsilon_n}{\varphi_n} = 0$.

Algorithm 1 Self-adaptive inertial viscosity-type Tseng's extragradient algorithm
Initialization:
Given $\delta > 0$, $\gamma_1 > 0$, $\phi \in (0, 1)$. Let $x_0, x_1 \in H$ be two initial points.
Iterative Steps: Calculate the next iteration point $x_{n+1}$ as follows:

$s_n = x_n + \delta_n(x_n - x_{n-1})$,
$y_n = P_C(s_n - \gamma_n A s_n)$,
$z_n = y_n - \gamma_n(A y_n - A s_n)$,
$x_{n+1} = \varphi_n f(z_n) + (1 - \varphi_n)z_n$,

where $\{\delta_n\}$ and $\{\gamma_n\}$ are updated by (3.1) and (3.2), respectively:

$\delta_n = \min\{\epsilon_n/\|x_n - x_{n-1}\|,\ \delta\}$ if $x_n \neq x_{n-1}$; $\delta_n = \delta$ otherwise.  (3.1)

$\gamma_{n+1} = \min\{\phi\|s_n - y_n\|/\|A s_n - A y_n\|,\ \gamma_n\}$ if $A s_n - A y_n \neq 0$; $\gamma_{n+1} = \gamma_n$ otherwise.  (3.2)

Lemma 3.1.
The sequence $\{\gamma_n\}$ generated by (3.2) is nonincreasing and $\lim_{n\to\infty}\gamma_n = \gamma \ge \min\{\gamma_1, \phi/L\}$. Proof.
On account of (3.2), we have $\gamma_{n+1} \le \gamma_n$ for all $n \in \mathbb{N}$. Hence $\{\gamma_n\}$ is nonincreasing. Moreover, since $A$ is $L$-Lipschitz continuous, we get $\|A s_n - A y_n\| \le L\|s_n - y_n\|$. Thus,

$\dfrac{\phi\|s_n - y_n\|}{\|A s_n - A y_n\|} \ge \dfrac{\phi}{L}$, if $A s_n \neq A y_n$,

which together with (3.2) implies that $\gamma_n \ge \min\{\gamma_1, \phi/L\}$. Therefore, $\lim_{n\to\infty}\gamma_n = \gamma \ge \min\{\gamma_1, \phi/L\}$, since the sequence $\{\gamma_n\}$ is bounded below and nonincreasing.

The following lemmas play a significant part in the convergence proof of our algorithm. Lemma 3.2.
Suppose that Assumptions (C1)–(C3) hold. Let $\{s_n\}$ and $\{y_n\}$ be two sequences generated by Algorithm 1. If there exists a subsequence $\{s_{n_k}\}$ converging weakly to $z \in H$ with $\lim_{k\to\infty}\|s_{n_k} - y_{n_k}\| = 0$, then $z \in \mathrm{VI}(C, A)$.

Proof. From the property of the projection and $y_n = P_C(s_n - \gamma_n A s_n)$, we have

$\langle s_{n_k} - \gamma_{n_k} A s_{n_k} - y_{n_k}, x - y_{n_k}\rangle \le 0, \ \forall x \in C$,

which can be written as

$\dfrac{1}{\gamma_{n_k}}\langle s_{n_k} - y_{n_k}, x - y_{n_k}\rangle \le \langle A s_{n_k}, x - y_{n_k}\rangle, \ \forall x \in C$.
Through a direct calculation, we get

$\dfrac{1}{\gamma_{n_k}}\langle s_{n_k} - y_{n_k}, x - y_{n_k}\rangle + \langle A s_{n_k}, y_{n_k} - s_{n_k}\rangle \le \langle A s_{n_k}, x - s_{n_k}\rangle, \ \forall x \in C$.  (3.3)

The sequence $\{s_{n_k}\}$ is bounded since it converges weakly to $z \in H$. Then, from the Lipschitz continuity of $A$ and $\|s_{n_k} - y_{n_k}\| \to 0$, we obtain that $\{A s_{n_k}\}$ and $\{y_{n_k}\}$ are also bounded. Since $\gamma_{n_k} \ge \min\{\gamma_1, \phi/L\}$, one concludes from (3.3) that

$\liminf_{k\to\infty}\langle A s_{n_k}, x - s_{n_k}\rangle \ge 0, \ \forall x \in C$.  (3.4)

Moreover, one has

$\langle A y_{n_k}, x - y_{n_k}\rangle = \langle A y_{n_k} - A s_{n_k}, x - s_{n_k}\rangle + \langle A s_{n_k}, x - s_{n_k}\rangle + \langle A y_{n_k}, s_{n_k} - y_{n_k}\rangle$.  (3.5)

Since $\lim_{k\to\infty}\|s_{n_k} - y_{n_k}\| = 0$ and $A$ is Lipschitz continuous, we get $\lim_{k\to\infty}\|A s_{n_k} - A y_{n_k}\| = 0$. This together with (3.4) and (3.5) yields $\liminf_{k\to\infty}\langle A y_{n_k}, x - y_{n_k}\rangle \ge 0$.

Next, we select a decreasing sequence of positive numbers $\{\zeta_k\}$ such that $\zeta_k \to 0$ as $k \to \infty$. For each $k$, we denote by $N_k$ the smallest positive integer such that

$\langle A y_{n_j}, x - y_{n_j}\rangle + \zeta_k \ge 0, \ \forall j \ge N_k$.  (3.6)

It can easily be seen that the sequence $\{N_k\}$ is increasing because $\{\zeta_k\}$ is decreasing. Moreover, for each $k$, since $\{y_{N_k}\} \subset C$, we may assume $A y_{N_k} \neq 0$ (otherwise, $y_{N_k}$ is a solution) and set $u_{N_k} = A y_{N_k}/\|A y_{N_k}\|^2$. Then we get $\langle A y_{N_k}, u_{N_k}\rangle = 1$ for each $k$. Now we can deduce from (3.6) that $\langle A y_{N_k}, x + \zeta_k u_{N_k} - y_{N_k}\rangle \ge 0$ for each $k$. Since $A$ is pseudomonotone on $H$, we obtain

$\langle A(x + \zeta_k u_{N_k}), x + \zeta_k u_{N_k} - y_{N_k}\rangle \ge 0$,

which further yields

$\langle A x, x - y_{N_k}\rangle \ge \langle A x - A(x + \zeta_k u_{N_k}), x + \zeta_k u_{N_k} - y_{N_k}\rangle - \zeta_k\langle A x, u_{N_k}\rangle$.  (3.7)

Now, we prove that $\lim_{k\to\infty}\zeta_k u_{N_k} = 0$.
Since $s_{n_k} \rightharpoonup z$ and $\lim_{k\to\infty}\|s_{n_k} - y_{n_k}\| = 0$, we get $y_{n_k} \rightharpoonup z$. From $\{y_n\} \subset C$, we have $z \in C$. Since $A$ is sequentially weakly continuous on $C$, $\{A y_{n_k}\}$ converges weakly to $A z$. Assume that $A z \neq 0$ (otherwise, $z$ is a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we obtain

$0 < \|A z\| \le \liminf_{k\to\infty}\|A y_{n_k}\|$.

Using $\{y_{N_k}\} \subset \{y_{n_k}\}$ and $\zeta_k \to 0$ as $k \to \infty$, we have

$0 \le \limsup_{k\to\infty}\|\zeta_k u_{N_k}\| = \limsup_{k\to\infty}\dfrac{\zeta_k}{\|A y_{N_k}\|} \le \dfrac{\limsup_{k\to\infty}\zeta_k}{\liminf_{k\to\infty}\|A y_{n_k}\|} = 0$.

That is, $\lim_{k\to\infty}\zeta_k u_{N_k} = 0$. Thus, since $A$ is Lipschitz continuous, the sequences $\{y_{N_k}\}$ and $\{u_{N_k}\}$ are bounded, and $\lim_{k\to\infty}\zeta_k u_{N_k} = 0$, we can conclude from (3.7) that

$\liminf_{k\to\infty}\langle A x, x - y_{N_k}\rangle \ge 0$.

Therefore,

$\langle A x, x - z\rangle = \lim_{k\to\infty}\langle A x, x - y_{N_k}\rangle = \liminf_{k\to\infty}\langle A x, x - y_{N_k}\rangle \ge 0, \ \forall x \in C$.
Consequently, we observe that $z \in \mathrm{VI}(C, A)$ by Lemma 2.1. This completes the proof.

Remark 3.2. If $A$ is monotone, then $A$ does not need to be sequentially weakly continuous; see [32].

Lemma 3.3.
Suppose that Assumptions (C1)–(C3) hold. Let the sequences $\{z_n\}$ and $\{y_n\}$ be generated by Algorithm 1. Then, for all $u \in \mathrm{VI}(C, A)$,

$\|z_n - u\|^2 \le \|s_n - u\|^2 - \Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2$, and $\|z_n - y_n\| \le \phi\dfrac{\gamma_n}{\gamma_{n+1}}\|s_n - y_n\|$.

Proof. First, using the definition of $\{\gamma_n\}$, one obtains

$\|A s_n - A y_n\| \le \dfrac{\phi}{\gamma_{n+1}}\|s_n - y_n\|, \ \forall n$.  (3.8)

Indeed, if $A s_n = A y_n$, then (3.8) clearly holds. Otherwise, it follows from (3.2) that

$\gamma_{n+1} = \min\Big\{\dfrac{\phi\|s_n - y_n\|}{\|A s_n - A y_n\|},\ \gamma_n\Big\} \le \dfrac{\phi\|s_n - y_n\|}{\|A s_n - A y_n\|}$.

Consequently, $\|A s_n - A y_n\| \le \frac{\phi}{\gamma_{n+1}}\|s_n - y_n\|$. Therefore, inequality (3.8) holds both when $A s_n = A y_n$ and when $A s_n \neq A y_n$. From the definition of $z_n$, one sees that

$\|z_n - u\|^2 = \|y_n - \gamma_n(A y_n - A s_n) - u\|^2$
$= \|y_n - u\|^2 + \gamma_n^2\|A y_n - A s_n\|^2 - 2\gamma_n\langle y_n - u, A y_n - A s_n\rangle$
$= \|s_n - u\|^2 + \|y_n - s_n\|^2 + 2\langle y_n - s_n, s_n - u\rangle + \gamma_n^2\|A y_n - A s_n\|^2 - 2\gamma_n\langle y_n - u, A y_n - A s_n\rangle$
$= \|s_n - u\|^2 + \|y_n - s_n\|^2 - 2\langle y_n - s_n, y_n - s_n\rangle + 2\langle y_n - s_n, y_n - u\rangle + \gamma_n^2\|A y_n - A s_n\|^2 - 2\gamma_n\langle y_n - u, A y_n - A s_n\rangle$
$= \|s_n - u\|^2 - \|y_n - s_n\|^2 + 2\langle y_n - s_n, y_n - u\rangle + \gamma_n^2\|A y_n - A s_n\|^2 - 2\gamma_n\langle y_n - u, A y_n - A s_n\rangle$.  (3.9)

Since $y_n = P_C(s_n - \gamma_n A s_n)$, using the property of the projection, we obtain $\langle y_n - s_n + \gamma_n A s_n, y_n - u\rangle \le 0$, or equivalently

$2\langle y_n - s_n, y_n - u\rangle \le -2\gamma_n\langle A s_n, y_n - u\rangle$.  (3.10)

From (3.8), (3.9) and (3.10), we have

$\|z_n - u\|^2 \le \|s_n - u\|^2 - \|y_n - s_n\|^2 - 2\gamma_n\langle A s_n, y_n - u\rangle + \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\|s_n - y_n\|^2 - 2\gamma_n\langle y_n - u, A y_n - A s_n\rangle$
$\le \|s_n - u\|^2 - \Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2 - 2\gamma_n\langle y_n - u, A y_n\rangle$.  (3.11)

From $u \in \mathrm{VI}(C, A)$, one has $\langle A u, y_n - u\rangle \ge 0$. Using the pseudomonotonicity of $A$, we get

$\langle A y_n, y_n - u\rangle \ge 0$.  (3.12)

Combining (3.11) and (3.12), we can show that

$\|z_n - u\|^2 \le \|s_n - u\|^2 - \Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2$.

According to the definition of $z_n$ and (3.8), we obtain $\|z_n - y_n\| \le \phi\frac{\gamma_n}{\gamma_{n+1}}\|s_n - y_n\|$. This completes the proof of Lemma 3.3.

Theorem 3.1.
Suppose that Assumptions (C1)–(C5) hold. Then the iterative sequence $\{x_n\}$ generated by Algorithm 1 converges to $u \in \mathrm{VI}(C, A)$ in norm, where $u = P_{\mathrm{VI}(C,A)} \circ f(u)$.

Proof. Claim 1.
The sequence $\{x_n\}$ is bounded. Indeed, according to Lemma 3.3, we get $\lim_{n\to\infty}\big(1 - \phi^2\gamma_n^2/\gamma_{n+1}^2\big) = 1 - \phi^2 > 0$. Therefore, there exists $n_0 \in \mathbb{N}$ such that

$1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2} > 0, \ \forall n \ge n_0$.

From Lemma 3.3, one has

$\|z_n - u\| \le \|s_n - u\|, \ \forall n \ge n_0$.  (3.13)

By the definition of $s_n$, one sees that

$\|s_n - u\| = \|x_n + \delta_n(x_n - x_{n-1}) - u\| \le \|x_n - u\| + \delta_n\|x_n - x_{n-1}\| = \|x_n - u\| + \varphi_n\cdot\dfrac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\|$.  (3.14)

From Remark 3.1, one gets $\frac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\| \to 0$. Thus, there exists a constant $Q_1 > 0$ such that

$\dfrac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\| \le Q_1, \ \forall n \ge 1$.  (3.15)

Using (3.13), (3.14) and (3.15), we obtain

$\|z_n - u\| \le \|s_n - u\| \le \|x_n - u\| + \varphi_n Q_1, \ \forall n \ge n_0$.  (3.16)

Using the definition of $x_{n+1}$ and (3.16), we have

$\|x_{n+1} - u\| = \|\varphi_n f(z_n) + (1 - \varphi_n)z_n - u\|$
$\le \varphi_n\|f(z_n) - f(u)\| + \varphi_n\|f(u) - u\| + (1 - \varphi_n)\|z_n - u\|$
$\le \varphi_n\rho\|z_n - u\| + \varphi_n\|f(u) - u\| + (1 - \varphi_n)\|z_n - u\|$
$= (1 - (1 - \rho)\varphi_n)\|z_n - u\| + \varphi_n\|f(u) - u\|$
$\le (1 - (1 - \rho)\varphi_n)\|x_n - u\| + \varphi_n Q_1 + \varphi_n\|f(u) - u\|$
$= (1 - (1 - \rho)\varphi_n)\|x_n - u\| + (1 - \rho)\varphi_n\dfrac{Q_1 + \|f(u) - u\|}{1 - \rho}$
$\le \max\Big\{\|x_n - u\|,\ \dfrac{Q_1 + \|f(u) - u\|}{1 - \rho}\Big\} \le \cdots \le \max\Big\{\|x_{n_0} - u\|,\ \dfrac{Q_1 + \|f(u) - u\|}{1 - \rho}\Big\}, \ \forall n \ge n_0$.

That is, $\{x_n\}$ is bounded. Consequently, $\{s_n\}$, $\{z_n\}$ and $\{f(z_n)\}$ are also bounded.

Claim 2.
$\Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2 \le \|x_n - u\|^2 - \|x_{n+1} - u\|^2 + \varphi_n Q_4$ for some $Q_4 > 0$. Indeed, it follows from (3.16) that

$\|s_n - u\|^2 \le (\|x_n - u\| + \varphi_n Q_1)^2 = \|x_n - u\|^2 + \varphi_n(2Q_1\|x_n - u\| + \varphi_n Q_1^2) \le \|x_n - u\|^2 + \varphi_n Q_2$  (3.17)

for some $Q_2 > 0$. Combining Lemma 3.3 and (3.17), we see that

$\|x_{n+1} - u\|^2 \le \varphi_n\|f(z_n) - u\|^2 + (1 - \varphi_n)\|z_n - u\|^2$
$\le \varphi_n(\|f(z_n) - f(u)\| + \|f(u) - u\|)^2 + (1 - \varphi_n)\|z_n - u\|^2$
$\le \varphi_n(\|z_n - u\| + \|f(u) - u\|)^2 + (1 - \varphi_n)\|z_n - u\|^2$
$= \varphi_n\|z_n - u\|^2 + (1 - \varphi_n)\|z_n - u\|^2 + \varphi_n(\|f(u) - u\|^2 + 2\|z_n - u\|\cdot\|f(u) - u\|)$
$\le \|z_n - u\|^2 + \varphi_n Q_3$
$\le \|s_n - u\|^2 - \Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2 + \varphi_n Q_3$
$\le \|x_n - u\|^2 - \Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2 + \varphi_n Q_4$,  (3.18)

where $Q_4 := Q_2 + Q_3$. Therefore, we obtain

$\Big(1 - \phi^2\dfrac{\gamma_n^2}{\gamma_{n+1}^2}\Big)\|s_n - y_n\|^2 \le \|x_n - u\|^2 - \|x_{n+1} - u\|^2 + \varphi_n Q_4$.

Claim 3.

$\|x_{n+1} - u\|^2 \le (1 - (1 - \rho)\varphi_n)\|x_n - u\|^2 + (1 - \rho)\varphi_n\Big[\dfrac{3Q_5}{1 - \rho}\cdot\dfrac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\| + \dfrac{2}{1 - \rho}\langle f(u) - u, x_{n+1} - u\rangle\Big], \ \forall n \ge n_0$,

for some $Q_5 > 0$.
Using the definition of $s_n$, we can show that

$\|s_n - u\|^2 = \|x_n + \delta_n(x_n - x_{n-1}) - u\|^2 \le \|x_n - u\|^2 + 2\delta_n\|x_n - u\|\|x_n - x_{n-1}\| + \delta_n^2\|x_n - x_{n-1}\|^2 \le \|x_n - u\|^2 + 3Q_5\delta_n\|x_n - x_{n-1}\|$,  (3.19)

where $Q_5 := \sup_{n\in\mathbb{N}}\{\|x_n - u\|,\ \delta\|x_n - x_{n-1}\|\} > 0$. Using (3.13) and (3.19), we get

$\|x_{n+1} - u\|^2 = \|\varphi_n f(z_n) + (1 - \varphi_n)z_n - u\|^2 = \|\varphi_n(f(z_n) - f(u)) + (1 - \varphi_n)(z_n - u) + \varphi_n(f(u) - u)\|^2$
$\le \|\varphi_n(f(z_n) - f(u)) + (1 - \varphi_n)(z_n - u)\|^2 + 2\varphi_n\langle f(u) - u, x_{n+1} - u\rangle$
$\le \big(\varphi_n\|f(z_n) - f(u)\| + (1 - \varphi_n)\|z_n - u\|\big)^2 + 2\varphi_n\langle f(u) - u, x_{n+1} - u\rangle$
$\le \big(\varphi_n\rho\|z_n - u\| + (1 - \varphi_n)\|z_n - u\|\big)^2 + 2\varphi_n\langle f(u) - u, x_{n+1} - u\rangle$
$\le (1 - (1 - \rho)\varphi_n)\|z_n - u\|^2 + 2\varphi_n\langle f(u) - u, x_{n+1} - u\rangle$
$\le (1 - (1 - \rho)\varphi_n)\|x_n - u\|^2 + (1 - \rho)\varphi_n\Big[\dfrac{3Q_5}{1 - \rho}\cdot\dfrac{\delta_n}{\varphi_n}\|x_n - x_{n-1}\| + \dfrac{2}{1 - \rho}\langle f(u) - u, x_{n+1} - u\rangle\Big], \ \forall n \ge n_0$.  (3.20)

Claim 4. $\{\|x_n - u\|\}$ converges to zero. By Lemma 2.2 and Remark 3.1, it remains to show that $\limsup_{k\to\infty}\langle f(u) - u, x_{n_k+1} - u\rangle \le 0$ for every subsequence $\{\|x_{n_k} - u\|\}$ of $\{\|x_n - u\|\}$ satisfying $\liminf_{k\to\infty}(\|x_{n_k+1} - u\| - \|x_{n_k} - u\|) \ge 0$. For this purpose, assume that $\{\|x_{n_k} - u\|\}$ is a subsequence of $\{\|x_n - u\|\}$ such that $\liminf_{k\to\infty}(\|x_{n_k+1} - u\| - \|x_{n_k} - u\|) \ge 0$.
Then,

$\liminf_{k\to\infty}\big(\|x_{n_k+1} - u\|^2 - \|x_{n_k} - u\|^2\big) = \liminf_{k\to\infty}\big[(\|x_{n_k+1} - u\| - \|x_{n_k} - u\|)(\|x_{n_k+1} - u\| + \|x_{n_k} - u\|)\big] \ge 0$.

It follows from Claim 2 and Assumption (C5) that

$\limsup_{k\to\infty}\Big(1 - \phi^2\dfrac{\gamma_{n_k}^2}{\gamma_{n_k+1}^2}\Big)\|s_{n_k} - y_{n_k}\|^2 \le \limsup_{k\to\infty}\big[\|x_{n_k} - u\|^2 - \|x_{n_k+1} - u\|^2 + \varphi_{n_k}Q_4\big]$
$= -\liminf_{k\to\infty}\big[\|x_{n_k+1} - u\|^2 - \|x_{n_k} - u\|^2\big] \le 0$,

which yields $\lim_{k\to\infty}\|s_{n_k} - y_{n_k}\| = 0$. From Lemma 3.3, we obtain $\lim_{k\to\infty}\|z_{n_k} - y_{n_k}\| = 0$, and hence $\lim_{k\to\infty}\|z_{n_k} - s_{n_k}\| = 0$. Moreover, using Remark 3.1 and Assumption (C5), we have

$\|x_{n_k} - s_{n_k}\| = \delta_{n_k}\|x_{n_k} - x_{n_k-1}\| = \varphi_{n_k}\cdot\dfrac{\delta_{n_k}}{\varphi_{n_k}}\|x_{n_k} - x_{n_k-1}\| \to 0$,

and $\|x_{n_k+1} - z_{n_k}\| = \varphi_{n_k}\|z_{n_k} - f(z_{n_k})\| \to 0$. Therefore, we conclude that

$\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - z_{n_k}\| + \|z_{n_k} - s_{n_k}\| + \|s_{n_k} - x_{n_k}\| \to 0$.  (3.21)

Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_j}} \rightharpoonup q$. Furthermore,

$\limsup_{k\to\infty}\langle f(u) - u, x_{n_k} - u\rangle = \lim_{j\to\infty}\langle f(u) - u, x_{n_{k_j}} - u\rangle = \langle f(u) - u, q - u\rangle$.  (3.22)

We get $s_{n_k} \rightharpoonup q$ since $\|x_{n_k} - s_{n_k}\| \to 0$. This together with $\lim_{k\to\infty}\|s_{n_k} - y_{n_k}\| = 0$ and Lemma 3.2 gives $q \in \mathrm{VI}(C, A)$. By the definition of $u = P_{\mathrm{VI}(C,A)} \circ f(u)$ and (3.22), we infer that

$\limsup_{k\to\infty}\langle f(u) - u, x_{n_k} - u\rangle = \langle f(u) - u, q - u\rangle \le 0$.  (3.23)

Combining (3.21) and (3.23), we see that

$\limsup_{k\to\infty}\langle f(u) - u, x_{n_k+1} - u\rangle \le \limsup_{k\to\infty}\langle f(u) - u, x_{n_k} - u\rangle \le 0$.  (3.24)

Thus, from Remark 3.1, (3.24), Claim 3 and Lemma 2.2, we conclude that $x_n \to u$. The proof of Theorem 3.1 is now complete.

If the inertial parameter $\delta_n = 0$ in Algorithm 1, we have the following result. Corollary 3.1.
Assume that the mapping $A: H \to H$ is $L$-Lipschitz continuous and pseudomonotone on $H$, and sequentially weakly continuous on $C$. Let the mapping $f: H \to H$ be $\rho$-contractive with $\rho \in [0, 1)$. Given $\gamma_1 > 0$, let $\{\varphi_n\} \subset (0, 1)$ satisfy $\lim_{n \to \infty} \varphi_n = 0$ and $\sum_{n=1}^{\infty} \varphi_n = \infty$. Let $x_1$ be the initial point and $\{x_n\}$ the sequence generated by
\[
\begin{cases}
y_n = P_C(x_n - \gamma_n A x_n), \\
z_n = y_n - \gamma_n (A y_n - A x_n), \\
x_{n+1} = \varphi_n f(z_n) + (1 - \varphi_n) z_n,
\end{cases} \tag{3.25}
\]
where the step size $\{\gamma_n\}$ is updated through (3.2). Then the sequence $\{x_n\}$ generated by Algorithm (3.25) converges in norm to $u \in \mathrm{VI}(C, A)$, where $u = P_{\mathrm{VI}(C,A)} \circ f(u)$. Remark 3.3.
It should be pointed out that Algorithm (3.25) improves and generalizes [27, Algorithm 3] and [28, Algorithm 1]. Moreover, our algorithm solves pseudomonotone (VIP), while the methods of [27] and [28] solve monotone (VIP). Since the class of pseudomonotone mappings contains the class of monotone mappings, our algorithm is applicable to a wider range of problems.
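The scheme (3.25) can be sketched in a few lines. The Python sketch below (the paper's experiments use MATLAB) assumes a standard self-adaptive step-size rule of the type denoted (3.2), namely $\gamma_{n+1} = \min\{\gamma_n, \phi \|x_n - y_n\| / \|Ax_n - Ay_n\|\}$; the paper's exact rule may differ (e.g., it may allow slight increases through a summable sequence $\varepsilon_n$). The toy operator, feasible set, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def tseng_viscosity(A, proj_C, f, x1, gamma1=1.0, phi=0.4, n_iter=2000):
    """Sketch of iteration (3.25): viscosity-type Tseng extragradient.

    The step-size update below is an assumed rule of the standard
    self-adaptive type (3.2); the paper's exact rule may differ.
    """
    x, gamma = np.asarray(x1, dtype=float), gamma1
    for n in range(1, n_iter + 1):
        varphi = 1.0 / (n + 1)                # varphi_n -> 0, sum varphi_n = inf
        Ax = A(x)
        y = proj_C(x - gamma * Ax)            # the only projection per iteration
        Ay = A(y)
        z = y - gamma * (Ay - Ax)             # Tseng's correction step
        denom = np.linalg.norm(Ay - Ax)
        if denom > 1e-12:                     # no Lipschitz constant needed
            gamma = min(gamma, phi * np.linalg.norm(x - y) / denom)
        x = varphi * f(z) + (1 - varphi) * z  # viscosity step
    return x

# toy VIP (not from the paper): A(x) = x - a on C = [-2, 2]^2, unique solution a
a = np.array([1.0, -1.0])
A = lambda x: x - a
proj_C = lambda x: np.clip(x, -2.0, 2.0)
f = lambda x: 0.1 * x                         # rho = 0.1 contraction
x = tseng_viscosity(A, proj_C, f, np.array([2.0, 2.0]))
print(np.linalg.norm(x - a))                  # small: x_n approaches VI(C, A)
```

Since $\mathrm{VI}(C, A)$ is a singleton here, the viscosity anchor $f$ does not change the limit; it only enforces strong convergence.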
4. Numerical examples
In this section, we present some computational tests and applications to illustrate the numerical behavior of our algorithm and to compare it with two strongly convergent algorithms, (MaTEGM) and (ViSEGM). It should be emphasized that all of these algorithms work without prior knowledge of the Lipschitz constant of the mapping. We use the FOM solver [33] to compute the projections onto $C$ and $T_n$ efficiently. All programs are implemented in MATLAB 2018a on a personal computer. The parameters are chosen as follows:
• $\phi = 0.$, $\gamma_1 = 1$, $\delta = 0.$, $\varepsilon_n = 1/(n+1)^2$, $\varphi_n = 1/(n+1)$, and $f(x) = 0.\,x$ for the proposed Algorithm 1 and Algorithm (ViSEGM);
• $\alpha = \ell = 0.$, $\phi = 0.$, $\varphi_n = 1/(n+1)$, and $\tau_n = 0.\,(1 - \varphi_n)$ for Algorithm (MaTEGM).
In our numerical examples, when the number of iterations is fixed, we use the runtime in seconds to measure the computational performance of the algorithms. If the solution $x^*$ of the problem is known, we use $E(x) = \|x - x^*\|$ to describe the behavior of the algorithms; otherwise, motivated by the characterization of solutions to (VIP), we use the sequences $D_n = \|x_n - x_{n-1}\|$ and $E_n = \|s_n - P_C(s_n - \gamma_n A s_n)\|$. Note that if $E_n \to 0$, then $x_n$ can be regarded as an approximate solution of (VIP). Example 4.1.
Let $A: \mathbb{R}^m \to \mathbb{R}^m$ ($m = 5, 10, 15, 20$) be the operator given by
\[
A(x) = \frac{1}{\|x\| + 1} \operatorname*{arg\,min}_{y \in \mathbb{R}^m} \Big\{ \frac{\|y\|^4}{4} + \frac{1}{2}\|x - y\|^2 \Big\}.
\]
We emphasize that the operator $A$ is not monotone; however, it is Lipschitz continuous and pseudomonotone (see [34]). In this example, the feasible set $C$ is a box constraint in $\mathbb{R}^m$. The initial values $x_0 = x_1$ are randomly generated by rand(m,1) in MATLAB, and a fixed maximum number of iterations serves as the common stopping criterion. For the four different dimensions of the operator $A$, the numerical results are presented in Figs. 1–4.
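A minimal sketch of evaluating such an operator, assuming the argmin objective $\|y\|^4/4 + \|x - y\|^2/2$ (a hypothetical reading; with this choice the minimizer is $y = \alpha x$, where $\alpha$ is the unique real root of $\alpha^3 \|x\|^2 + \alpha = 1$, and the resulting $A$ is indeed non-monotone):

```python
import numpy as np

def A(x):
    # minimizer of ||y||^4/4 + ||x - y||^2/2 is y = alpha*x, where alpha is
    # the unique real root of alpha^3 * t^2 + alpha - 1 = 0 with t = ||x||
    t = np.linalg.norm(x)
    if t == 0.0:
        return x                          # A(0) = 0
    roots = np.roots([t**2, 0.0, 1.0, -1.0])
    alpha = float(roots[np.abs(roots.imag) < 1e-9].real.max())
    return alpha * x / (t + 1.0)

# numerical evidence of non-monotonicity: two colinear points where
# <A(x) - A(y), x - y> < 0
x, y = np.array([8.0, 0.0]), np.array([27.0, 0.0])
print(np.dot(A(x) - A(y), x - y) < 0)     # True
```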
Figure 1: Numerical results for Example 4.1 ($m = 5$)
Figure 2: Numerical results for Example 4.1 ($m = 10$)
Figure 3: Numerical results for Example 4.1 ($m = 15$)
Figure 4: Numerical results for Example 4.1 ($m = 20$) Example 4.2.
In the second example, we consider a linear operator $A: \mathbb{R}^m \to \mathbb{R}^m$ ($m = 5, 10, 15, 20$) of the form $A(x) = Gx + g$, where $g \in \mathbb{R}^m$ and $G = BB^{\mathsf{T}} + M + E$; here $B \in \mathbb{R}^{m \times m}$, $M \in \mathbb{R}^{m \times m}$ is skew-symmetric, and $E \in \mathbb{R}^{m \times m}$ is a diagonal matrix whose diagonal entries are non-negative (hence $G$ is positive semidefinite). The feasible set $C$ is a box constraint in $\mathbb{R}^m$. The mapping $A$ is strongly pseudomonotone and Lipschitz continuous. In this numerical example, the entries of $B$ and $M$ are generated randomly in a symmetric interval, the diagonal entries of $E$ are generated randomly and non-negatively, and $g = 0$. It is easy to see that the solution of the problem is $x^* = \{0\}$. A fixed maximum number of iterations serves as the common stopping criterion, and the initial values $x_0 = x_1$ are randomly generated by rand(m,1) in MATLAB. The numerical results with elapsed time are described in Fig. 5. Example 4.3.
Finally, we focus on a case in the Hilbert space $H = L^2([0, 1])$ with inner product $\langle x, y \rangle = \int_0^1 x(t) y(t) \, \mathrm{d}t$ and norm $\|x\| = \big( \int_0^1 x(t)^2 \, \mathrm{d}t \big)^{1/2}$. Let $b$ and $B$ be two positive numbers such that $B/(m+1) < b/m < b < B$ for some $m > 1$. We select the feasible set as $C = \{x \in H : \|x\| \le b\}$. The operator $A: H \to H$ is of the form $A(x) = (B - \|x\|) x$, $\forall x \in H$.
It should be pointed out that the operator $A$ is not monotone. Indeed, pick $x^\ddagger \in C$ with $B/(m+1) < \|x^\ddagger\| < b/m$; then $m\|x^\ddagger\| < b$, so $y^\ddagger := m x^\ddagger \in C$. A simple computation gives
\[
\langle A(x^\ddagger) - A(y^\ddagger), x^\ddagger - y^\ddagger \rangle = (1 - m)^2 \|x^\ddagger\|^2 \big( B - (1 + m)\|x^\ddagger\| \big) < 0,
\]
since $\|x^\ddagger\| > B/(m+1)$. Hence, the operator $A$ is not monotone on $C$. Next, we show that $A$ is pseudomonotone. Indeed, assume that $\langle A(x), y - x \rangle \ge 0$ for $x, y \in C$, that is, $(B - \|x\|) \langle x, y - x \rangle \ge 0$. From $\|x\| < B$, we get $\langle x, y - x \rangle \ge 0$. Therefore,
\[
\langle A(y), y - x \rangle = (B - \|y\|) \langle y, y - x \rangle \ge (B - \|y\|) \big( \langle y, y - x \rangle - \langle x, y - x \rangle \big) = (B - \|y\|) \|y - x\|^2 \ge 0.
\]
Figure 5: Numerical results for Example 4.2 ((a) $m = 5$; (b) $m = 10$; (c) $m = 15$; (d) $m = 20$).
For the experiment, we take $B = 1.$, $b = 1$, $m = 1.$. We know that the solution of the problem is $x^*(t) = 0$, and a fixed maximum number of iterations is used as the stopping criterion. Fig. 6 shows the behavior of $E_n = \|x_n(t) - x^*(t)\|$ for all algorithms with four starting points $x_0(t) = x_1(t)$ (Case I: $x_1(t) = t$; Case II: $x_1(t) = \cos(t)$; Case III: $x_1(t) = \sin(2t)$; Case IV: $x_1(t) = 2t$).
Figure 6: Numerical results for Example 4.3 ((a)–(d): starting points $t$, $\cos(t)$, $\sin(2t)$, $2t$).
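The non-monotonicity computation for $A(x) = (B - \|x\|)x$ is easy to replicate numerically. In the sketch below, the values of $B$, $b$, $m$ are placeholders chosen only to satisfy $B/(m+1) < b/m < b < B$ (not the paper's exact values), and $\mathbb{R}^2$ stands in for a finite-dimensional slice of $L^2([0,1])$:

```python
import numpy as np

B, b, m = 1.5, 1.0, 1.1     # placeholders satisfying B/(m+1) < b/m < b < B
A = lambda x: (B - np.linalg.norm(x)) * x

# pick x with B/(m+1) < ||x|| < b/m; then y = m*x still lies in C = {||x|| <= b}
x = np.array([0.8, 0.0])
y = m * x
assert np.linalg.norm(y) <= b
val = np.dot(A(x) - A(y), x - y)
print(val)                  # negative: A is not monotone on C
```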
Remark 4.1. (1) From Figs. 1–6, we can see that our proposed algorithm converges quickly and has better computational performance than the existing algorithms. Moreover, these results are independent of the choice of initial values and of the problem dimension; in this sense, our algorithm is robust.
(2) It should be emphasized that Algorithm (MaTEGM) needs more running time to achieve the same error accuracy because it uses an Armijo-type rule to update the step size automatically, and this update criterion requires evaluating the operator $A$ several times in each iteration. In contrast, our proposed Algorithm 1 updates the step size by a simple calculation using previously computed information, which makes it converge faster.
(3) Note that the operator $A$ is pseudomonotone or strongly pseudomonotone in our numerical experiments. In this setting, the algorithms of [27, 28, 29] for solving monotone (VIP) are not applicable; therefore, our proposed algorithm covers a wider range of practical applications.
Next, we apply our proposed Algorithm 1 to the (VIP) that arises in optimal control problems. Recently, many scholars have proposed different methods for such problems; we refer the reader to [35, 36, 37] for the algorithms and a detailed description of the problem.
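The evaluation-count contrast described in Remark 4.1(2) can be made concrete. The snippet below uses a hypothetical Armijo rule of the backtracking type (constants $\alpha$, $\ell$, $\phi$ are illustrative, and $C = \mathbb{R}^3$, so the projection is the identity) and compares it with a self-adaptive update that reuses the operator values Tseng's iteration computes anyway:

```python
import numpy as np

calls = {"n": 0}
def A(v):
    calls["n"] += 1
    return v                  # identity stand-in; only the call count matters

x = np.ones(3)
alpha, ell, phi = 1.0, 0.5, 0.4     # illustrative Armijo constants

# Armijo-type rule: every trial step size needs a fresh evaluation of A
calls["n"] = 0
Ax, gamma = A(x), alpha
while True:
    y = x - gamma * Ax              # projection omitted: C = R^3 here
    if gamma * np.linalg.norm(Ax - A(y)) <= phi * np.linalg.norm(x - y):
        break
    gamma *= ell
armijo_calls = calls["n"]           # 1 + number of trial points

# self-adaptive rule of the (3.2) type: reuses A x_n and A y_n, which the
# iteration itself already computed, so no extra evaluations are needed
calls["n"] = 0
Ax, Ay = A(x), A(y)
denom = np.linalg.norm(Ax - Ay)
gamma_new = min(alpha, phi * np.linalg.norm(x - y) / denom) if denom > 0 else alpha
adaptive_calls = calls["n"]

print(armijo_calls, adaptive_calls)  # 4 2
```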
Example 4.4 (Control of a harmonic oscillator, see [38]).
\[
\begin{aligned}
\text{minimize} \quad & x_2(3\pi) \\
\text{subject to} \quad & \dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = -x_1(t) + u(t), \quad \forall t \in [0, 3\pi], \\
& x(0) = 0, \quad u(t) \in [-1, 1].
\end{aligned}
\]
The exact optimal control of Example 4.4 is known:
\[
u^*(t) =
\begin{cases}
1, & \text{if } t \in [0, \pi/2) \cup (3\pi/2, 5\pi/2), \\
-1, & \text{if } t \in (\pi/2, 3\pi/2) \cup (5\pi/2, 3\pi].
\end{cases}
\]
Our parameters are set as follows: $N = 100$, $\phi = 0.$, $\gamma_1 = 0.$, $\delta = 0.$, $\varepsilon_n = 10^{-}/(n+1)$, $\varphi_n = 10^{-}/(n+1)$, $f(x) = 0.\,x$. The initial controls $u_0(t) = u_1(t)$ are randomly generated in $[-1, 1]$, and the stopping criterion is $\|u_{n+1} - u_n\| \le 10^{-}$ or the maximum number of iterations. Algorithm 1 reached the required error accuracy within the allowed number of iterations. Fig. 7 shows the approximate optimal control and the corresponding trajectories of Algorithm 1 ((a) initial and optimal controls; (b) optimal trajectories). We now consider an example in which the terminal function is not linear.
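For this linear Mayer problem, the reduced gradient of $u \mapsto x_2(3\pi)$ has a closed form via the rotation transition matrix of the oscillator, namely $\nabla J(u)(s) = \cos(3\pi - s) = -\cos(s)$ (a derivation we supply for illustration; the paper instead solves the discretized problem as a (VIP) with Algorithm 1). A projected-gradient sketch then recovers the bang-bang control $u^*(t) = \operatorname{sign}(\cos t)$ stated above:

```python
import numpy as np

# discretized projected-gradient sketch for Example 4.4; the reduced
# gradient of u -> x2(3*pi) is -cos(s), independent of u, so the VIP
# <grad, v - u*> >= 0 for all admissible v forces u*(s) = sign(cos(s))
N = 600
t = np.linspace(0.0, 3 * np.pi, N)
grad = -np.cos(t)
u = np.zeros(N)                              # arbitrary starting control
for _ in range(50):
    u = np.clip(u - 0.5 * grad, -1.0, 1.0)   # projected gradient step

u_exact = np.sign(np.cos(t))
mask = np.abs(np.cos(t)) > 0.05              # ignore points near the switches
print(np.max(np.abs(u[mask] - u_exact[mask])))   # prints 0.0
```

Away from the switching times the iterates saturate exactly at the control bounds, which is the bang-bang structure predicted by the exact solution.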
Example 4.5 (See [39]).
\[
\begin{aligned}
\text{minimize} \quad & -x_1(2) + (x_2(2))^2 \\
\text{subject to} \quad & \dot{x}_1(t) = x_2(t), \quad \dot{x}_2(t) = u(t), \quad \forall t \in [0, 2], \\
& x_1(0) = 0, \quad x_2(0) = 0, \quad u(t) \in [-1, 1].
\end{aligned}
\]
Figure 7: Numerical results for Example 4.4 ((a) initial and optimal controls; (b) optimal trajectories).
The exact optimal control of Example 4.5 is
\[
u^*(t) =
\begin{cases}
1, & \text{if } t \in [0, 1.2), \\
-1, & \text{if } t \in (1.2, 2].
\end{cases}
\]
In this example, the parameters of our algorithm are set the same as in Example 4.4. After the maximum allowable number of iterations, Algorithm 1 had not yet reached the required error accuracy; reaching the allowable error range may require more iterations. The approximate optimal control and the corresponding trajectories of Algorithm 1 are plotted in Fig. 8 ((a) initial and optimal controls; (b) optimal trajectories). Remark 4.2.
As can be seen from Examples 4.4 and 4.5, the algorithm proposed in this paper works well on optimal control problems. It should be pointed out that our proposed algorithm performs better when the terminal function is linear rather than nonlinear (cf. Figs. 7 and 8).
5. Conclusion
In this paper, based on the inertial method, Tseng's extragradient method and the viscosity method, we introduced a new extragradient algorithm to solve pseudomonotone variational inequalities in Hilbert spaces. The main benefit of the suggested method is that only one projection needs to be computed in each iteration. The convergence of the algorithm was proved without prior knowledge of the Lipschitz constant of the mapping. Moreover, our algorithm includes an inertial term, which substantially improves its convergence speed. Our numerical experiments showed that the proposed algorithm improves on several existing algorithms in the literature. As an application, the variational inequality problem arising in optimal control was also studied.
References
[1] Cuong, T.H., Yao, J.C., Yen, N.D.: Qualitative properties of the minimum sum-of-squares clustering problem. Optimization (2020). DOI: 10.1080/02331934.2020.1778685
[2] Khan, A.A., Sama, M.: Optimal control of multivalued quasi variational inequalities. Nonlinear Anal., 1419–1428 (2012)
[3] Sahu, D.R., Yao, J.C., Verma, M., Shukla, K.K.: Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization (2020). DOI: 10.1080/02331934.2019.1702040
[4] Cho, S.Y., Li, W., Kang, S.M.: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl., Article ID 199 (2013)
[5] Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algorithms, 259–282 (2017)
[6] Tan, B., Xu, S., Li, S.: Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal., 871–884 (2020)
[7] Malitsky, Y.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim., 502–520 (2015)
[8] Malitsky, Y.: Proximal extrapolated gradient methods for variational inequalities. Optim. Methods Softw., 140–164 (2018)
[9] Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Program. (2019). DOI: 10.1007/s10107-019-01416-w
[10] Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika i Mat. Metody, 747–756 (1976)
[11] Shehu, Y., Iyiola, O.S., Li, X.H., Dong, Q.-L.: Convergence analysis of projection method for variational inequalities. Comput. Appl. Math., Article ID 161 (2019)
[12] Shehu, Y., Li, X.H., Dong, Q.-L.: An efficient projection-type method for monotone variational inequalities in Hilbert spaces. Numer. Algorithms, 365–388 (2020)
[13] Tan, B., Fan, J., Li, S.: Self adaptive inertial extragradient algorithms for solving variational inequality problems. arXiv preprint arXiv:2006.04287 (2020)
[14] Vuong, P.T.: On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl., 399–409 (2018)
[15] Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim., 431–446 (2000)
[16] Bot, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. European J. Oper. Res., 49–60 (2020)
[17] Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw., 827–845 (2011)
[18] Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl., 318–335 (2011)
[19] Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization, 1119–1132 (2012)
[20] Thong, D.V., Shehu, Y., Iyiola, O.S.: Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer. Algorithms (2019). DOI: 10.1007/s11075-019-00780-0
[21] Thong, D.V., Vuong, P.T.: Modified Tseng's extragradient methods for solving pseudo-monotone variational inequalities. Optimization, 2207–2226 (2019)
[22] Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys., 1–17 (1964)
[23] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 183–202 (2009)
[24] Gibali, A., Hieu, D.V.: A new inertial double-projection method for solving variational inequalities. J. Fixed Point Theory Appl., Article ID 97 (2019)
[25] Zhou, Z., Tan, B., Li, S.: A new accelerated self-adaptive stepsize algorithm with excellent stability for split common fixed point problems. Comput. Appl. Math., Article ID 220 (2020)
[26] Thong, D.V., Hieu, D.V., Rassias, T.M.: Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett., 115–144 (2020)
[27] Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms, 1045–1060 (2018)
[28] Yang, J., Liu, H.: Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms, 741–752 (2019)
[29] Fan, J., Qin, X.: Weak and strong convergence of inertial Tseng's extragradient algorithms for solving variational inequality problems. Optimization (2020). DOI: 10.1080/02331934.2020.1789129
[30] Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl., 281–295 (1992)
[31] Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal., 742–750 (2012)
[32] Denisov, S.V., Semenov, V.V., Chabak, L.M.: Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal., 757–765 (2015)
[33] Beck, A., Guttmann-Beck, N.: FOM—a MATLAB toolbox of first-order methods for solving convex optimization problems. Optim. Methods Softw., 172–193 (2019)
[34] Hieu, D.V., Cho, Y.J., Xiao, Y.-b., Kumam, P.: Relaxed extragradient algorithm for solving pseudomonotone variational inequalities in Hilbert spaces. Optimization (2019). DOI: 10.1080/02331934.2019.1683554
[35] Preininger, J., Vuong, P.T.: On the convergence of the gradient projection method for convex optimal control problems with bang-bang solutions. Comput. Optim. Appl., 221–238 (2018)
[36] Vuong, P.T., Shehu, Y.: Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms, 269–291 (2019)
[37] Hieu, D.V., Strodiot, J.J., Muu, L.D.: Strongly convergent algorithms by using new adaptive regularization parameter for equilibrium problems. J. Comput. Appl. Math., Article ID 112844 (2020)
[38] Pietrus, A., Scarinci, T., Veliov, V.M.: High order discrete approximations to Mayer's problems for linear systems. SIAM J. Control Optim. 56