Adaptive Tuning of Feedback Gain in Time-Delayed Feedback Control
Judith Lehnert, Philipp Hövel, Valentin Flunkert, Peter Yu. Guzenko, Alexander L. Fradkov, Eckehard Schöll
arXiv [nlin.AO]. Chaos, in print, Dec. 2011
J. Lehnert,¹ P. Hövel,¹,² V. Flunkert,¹ P. Yu. Guzenko,³ A. L. Fradkov,⁴,⁵ and E. Schöll¹
¹Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, D-10623 Berlin, Germany
²Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Philippstr. 13, 10115 Berlin, Germany
³SPb State Polytechnical University, Politechnicheskaya str. 29, St. Petersburg, 195251, Russia
⁴Institute for Problems of Mechanical Engineering, Russian Academy of Sciences, Bolshoy Ave. 61, V.O., St. Petersburg, 199178, Russia
⁵SPb State University, Universitetskii pr. 28, St. Petersburg, 198504, Russia
(Dated: 26 April 2018)
We demonstrate that time-delayed feedback control can be improved by adaptively tuning the feedback gain. This adaptive controller is applied to the stabilization of an unstable fixed point and of an unstable periodic orbit embedded in a chaotic attractor. The adaptation algorithm is constructed using the speed-gradient method of control theory. Our computer simulations show that the adaptation algorithm can find an appropriate value of the feedback gain for single and multiple delays. Furthermore, we show that our method is robust to noise and different initial conditions.
The control of nonlinear systems is a central topic in dynamical system theory, with a diverse range of applications. Adaptive control schemes have emerged as a type of control method that optimizes the control parameters with respect to an appropriate goal function, thereby minimizing, for instance, the consumed power or the time needed to reach the control goal. In this work we combine time-delayed feedback control, an established method from chaos control, with an adaptive speed-gradient scheme to optimize the control force. We demonstrate how this combined scheme can be utilized to stabilize various target states, e.g., unstable fixed points or periodic orbits, with little or no a priori knowledge about the target state. We also investigate the robustness of the method to noise and perturbations.
I. INTRODUCTION
Stabilization of unstable and chaotic systems forms an important field of research in nonlinear dynamics. A variety of control schemes have been developed to control periodic orbits as well as steady states. A simple and efficient scheme, introduced by Pyragas, is known as time-delay autosynchronization (TDAS). This control method generates a feedback from the difference between the current state of a system and its counterpart some time units τ in the past. Thus, the control scheme does not rely on a reference system and has only a small number of control parameters, i.e., the feedback gain K and the time delay τ. It has been shown that TDAS can stabilize both unstable periodic orbits, e.g., embedded in a strange attractor, and unstable steady states. In the first case, TDAS is most efficient and noninvasive if τ corresponds to an integer multiple of the minimal period of the orbit. In the latter case, the method works best if the time delay is related to an intrinsic characteristic timescale given by the imaginary part of the system's eigenvalue. A generalization of the original Pyragas scheme, suggested by Socolar et al., uses multiple time delays. This extended time-delay autosynchronization (ETDAS) introduces a memory parameter R, which serves as a weight of states further in the past. In Ref. 9 it is shown that this method is able to control an unstable fixed point for a larger range of parameters compared to the original TDAS scheme. A variety of analytic results about time-delayed feedback control are known, for instance, in the case of long time delays, transient behavior, and unstable spatio-temporal patterns, or regarding the odd-number limitation, which was refuted in Refs. 18 and 19.

In the present paper, we apply the speed-gradient method to adaptively tune the feedback gain K, which is used in both the TDAS and ETDAS control methods, and utilize this scheme to stabilize an unstable focus in a generic model as well as an unstable periodic orbit embedded in a chaotic attractor. The former model is the generic linearization of a system with an unstable fixed point close to a Hopf bifurcation. The speed-gradient method is a well-known adaptive control technique that minimizes a predefined goal function by appropriately changing an accessible system parameter. The adaptation of the feedback gain may be useful, in particular, for systems with slowly changing parameters or when the domain of stability is unknown. There are several other approaches to adaptive control of nonlinear systems in the control literature; here we have chosen the speed-gradient method because it is simple and robust.

This paper is organized as follows: In Sec. II, we develop the adaptation algorithm using the example of an unstable focus. In Sec. III, we apply the adaptive control scheme to stabilize an unstable periodic orbit embedded in the chaotic attractor of the Rössler system. Finally, we conclude with Sec. IV.

II. STABILIZATION OF AN UNSTABLE FIXED POINT
First, we consider the stabilization of an unstable fixed point by time-delayed feedback. Unlike in previous works, we do not fix the feedback gain a priori, but tune it adaptively. We consider a general dynamical system given by a nonlinear vector field f:

Ẋ(t) = f[X(t)]   (1)

with X ∈ ℝⁿ and an unstable fixed point X* solving f(X*) = 0. The stability of this fixed point is obtained by linearizing the vector field around X*. Without loss of generality, let us assume X* = 0. In the following we consider the generic case of a two-dimensional unstable focus, i.e., a system close to a Hopf bifurcation, for which the linearized equations can be written in center-manifold coordinates x, y ∈ ℝ as

ẋ = λx + ωy   (2a)
ẏ = −ωx + λy,   (2b)

where λ and ω are positive real numbers. λ may be viewed as the bifurcation parameter governing the distance from the instability threshold, i.e., the Hopf bifurcation, and ω is the intrinsic eigenfrequency of the focus. For notational convenience, Eq. (2) can be rewritten as

Ẋ(t) = A X(t).   (3)

The eigenvalues Λ of the 2×2 matrix A are given by Λ = λ ± iω, so that for λ > 0 and ω ≠ 0 the fixed point is an unstable focus. We now apply time-delayed feedback control in order to stabilize this fixed point:

ẋ(t) = λx(t) + ωy(t) − K[x(t) − x(t−τ)]   (4a)
ẏ(t) = −ωx(t) + λy(t) − K[y(t) − y(t−τ)],   (4b)

where the feedback gain K and the time delay τ are real numbers. We assume that the value of τ is known and appropriately chosen. Mathematically speaking, the goal of the control is to change the sign of the real part of the eigenvalue, leading to a decay of perturbations from the target fixed point. Since the control force applied to the i-th component of the system involves only that same component, this control scheme is called diagonal coupling and is well suited for an analytical treatment.
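Before adapting K, it is instructive to see Eqs. (4) at work with a fixed gain. The following minimal sketch is a hypothetical implementation, not the authors' code; the Euler step, the gain K = 1, and the parameter values λ = 0.5, ω = π, τ = 1 are illustrative assumptions:

```python
import math

# Euler integration of the controlled focus, Eqs. (4), with FIXED gain K.
lambda_, omega = 0.5, math.pi           # unstable focus: eigenvalues lambda +/- i*omega
tau, K, dt = 1.0, 1.0, 1e-3             # delay, fixed feedback gain, Euler step (assumed)

n_delay = int(round(tau / dt))          # buffer length for the delayed states
xs = [0.0] * n_delay + [0.1]            # history x(t) = 0 for t < 0, perturbation x(0) = 0.1
ys = [0.0] * n_delay + [0.0]

for _ in range(int(round(40.0 / dt))):
    x, y = xs[-1], ys[-1]
    xd, yd = xs[-n_delay - 1], ys[-n_delay - 1]   # x(t - tau), y(t - tau)
    xs.append(x + dt * (lambda_ * x + omega * y - K * (x - xd)))    # Eq. (4a)
    ys.append(y + dt * (-omega * x + lambda_ * y - K * (y - yd)))   # Eq. (4b)

amplitude = math.hypot(xs[-1], ys[-1])
print(f"distance from fixed point at t = 40: {amplitude:.2e}")
```

With these settings the delayed feedback drives the trajectory back to the origin; removing the control term (K = 0) lets the focus spiral outward instead.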
Note that the feedback term vanishes if the fixed point is stabilized, since x*(t−τ) = x*(t) and y*(t−τ) = y*(t) for all t, indicating the noninvasiveness of the TDAS method.

To obtain an adaptation algorithm for the feedback gain K according to the standard procedure of the speed-gradient method, let us choose the goal function (or cost function) as

Q(X) = ½ { [x(t) − x(t−τ)]² + [y(t) − y(t−τ)]² }.   (5)

Successful control yields Q(X(t)) → 0 as t → ∞. The speed-gradient algorithm in differential form is given by K̇ = −γ ∇_K Q̇, where γ > 0 is the adaptation gain and ∇_K denotes ∂/∂K. Thus, we need to calculate the gradient, with respect to the feedback gain K, of the rate of change of the cost function. For the cost function Eq. (5) we obtain

Q̇ = [x(t) − x(t−τ)][ẋ(t) − ẋ(t−τ)] + [y(t) − y(t−τ)][ẏ(t) − ẏ(t−τ)].   (6)

The time derivatives of x and y are given by Eqs. (4). Thus, the speed-gradient method leads to the following equation for the feedback gain:

K̇(t) = γ { [x(t) − x(t−τ)][x(t) − 2x(t−τ) + x(t−2τ)] + [y(t) − y(t−τ)][y(t) − 2y(t−τ) + y(t−2τ)] }.   (7)

Owing to the homogeneity of the right-hand sides of Eqs. (4) and (7), the adaptation gain γ can be chosen as 1 without loss of generality, because Eqs. (4) and (7) can be rescaled by the transformation x(t) → x(t)/√γ and y(t) → y(t)/√γ.

Figure 1 depicts the time series of x and K according to Eqs. (4) and (7) for different initial conditions x(0) ∈ [0.02, 0.5] in steps of 0.02, from light (green) to dark (blue), with y(0) = 0. In all simulations x(t) = y(t) = 0 for t < 0 and K(t) = 0 for t ≤ τ. The parameters are chosen as λ = 0.5, ω = π, and τ = 1. Figure 1(a) shows that the adaptation algorithm works for a large range of initial conditions. Naturally, for initial conditions close to the fixed point the goal is reached faster. If the system starts too far from the fixed point (x(0) > 0.5, y(0) = 0), the control fails (curves not shown). Note, however, that the basin of attraction can be enlarged by increasing γ: owing to the scaling invariance, the maximum value of |x(0)| that still leads to successful control is proportional to √γ.

In Ref. 7 it was shown that tongues exist in the (K, τ) plane for which the fixed point can be stabilized, i.e., for a given τ there is a K-interval for which the control is successful. As can be seen in Fig. 1(b), the adaptive algorithm converges to some appropriate value of K in this interval, depending on the initial conditions.
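The adaptation loop of Eqs. (4) and (7) can be sketched as follows. This is a hypothetical implementation, not the authors' code; the Euler step, the horizon, and the initial condition x(0) = 0.2 are assumptions:

```python
import math

# Euler integration of Eqs. (4) with the speed-gradient update of Eq. (7).
lambda_, omega, tau, gamma, dt = 0.5, math.pi, 1.0, 1.0, 1e-3

n = int(round(tau / dt))              # samples per delay interval
xs = [0.0] * (2 * n) + [0.2]          # rest history on [-2*tau, 0); x(0) = 0.2 (assumed)
ys = [0.0] * (2 * n) + [0.0]
K = 0.0

for step in range(int(round(40.0 / dt))):
    x, y = xs[-1], ys[-1]
    xd, yd = xs[-n - 1], ys[-n - 1]             # states delayed by tau
    xdd, ydd = xs[-2 * n - 1], ys[-2 * n - 1]   # states delayed by 2*tau
    dK = gamma * ((x - xd) * (x - 2 * xd + xdd)
                  + (y - yd) * (y - 2 * yd + ydd))          # Eq. (7)
    xs.append(x + dt * (lambda_ * x + omega * y - K * (x - xd)))
    ys.append(y + dt * (-omega * x + lambda_ * y - K * (y - yd)))
    if (step + 1) * dt > tau:         # K(t) = 0 is held for t <= tau, as in the text
        K += dt * dK

Q = 0.5 * ((xs[-1] - xs[-n - 1]) ** 2 + (ys[-1] - ys[-n - 1]) ** 2)
print(f"adapted gain K = {K:.3f}, goal function Q = {Q:.2e}")
```

K stays at zero while the delay line fills, grows while the oscillations persist, and freezes once the trajectory enters the stable regime; the goal function Q then decays toward zero.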
FIG. 1. (Color online) Adaptive control of the fixed point: (a) time series x(t) and (b) feedback gain K(t) for different initial conditions x(0) ∈ [0.02, 0.5] in steps of 0.02, from light (green) to dark (blue) (in panel (b) from top to bottom), with y(0) = 0. Parameters: λ = 0.5, ω = π, γ = 1, τ = 1.

Figure 2 demonstrates that the algorithm works for a range of τ, i.e., for any value of τ within the domain of stability of the TDAS control. Black empty circles depict the transient time t_c after which the control goal is reached, in dependence on the time delay τ. The control goal is considered reached when the cost function Q becomes sufficiently small; we define the transient time by requiring that ⟨Q⟩ ≡ ∫_{t_c−τ}^{t_c} Q(t′) dt′ drops below a small threshold proportional to τ. The dark (dark purple) shaded regions correspond to the analytically obtained τ-intervals of the Pyragas control. Inside these intervals, t_c has a finite value, confirming that the adaptive control scheme adjusts the feedback gain K to an appropriate value. For a comparison with the transient time of TDAS at fixed feedback gain, see Ref. 15, where a power-law scaling t_c ∼ (K − K_c)⁻¹ with respect to the fixed feedback gain K was found (here K_c corresponds to the boundaries of stability). The curves corresponding to nonzero memory parameter R (crosses and squares) will be discussed below, where the speed-gradient method is applied to the ETDAS scheme.

For a thorough analysis of the stability of the fixed point, we perform a linear stability analysis of the system Eqs. (4) and (7). This system has the fixed point (0, 0, K*) for any constant K*. Linearization around this fixed point and the ansatz δx, δy, δK ∝ exp(Λt) yield the transcendental eigenvalue equation

0 = det ⎡ λ − K(1 − e^(−Λτ)) − Λ       ω                             0 ⎤
        ⎢ −ω                           λ − K(1 − e^(−Λτ)) − Λ        0 ⎥   (8)
        ⎣ 0                            0                            −Λ ⎦

  = −Λ [λ + iω − K(1 − e^(−Λτ)) − Λ] [λ − iω − K(1 − e^(−Λτ)) − Λ],   (9)

which can be solved numerically. Except for the factor −Λ, this equation coincides with the case of Pyragas control with constant feedback gain considered in Ref. 7. Thus, the adaptively controlled system has an additional eigenvalue at Λ = 0.
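The factorization leading from Eq. (8) to Eq. (9), including the extra root at Λ = 0, can be checked numerically. The sketch below is a hypothetical verification, not from the paper; the sample values of Λ and K are arbitrary assumptions:

```python
import cmath

lam, omega, tau = 0.5, cmath.pi, 1.0   # illustrative parameter assumptions

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def lhs(L, K):
    """Determinant of Eq. (8) at eigenvalue candidate L."""
    a = lam - K * (1 - cmath.exp(-L * tau)) - L   # repeated diagonal entry
    return det3([[a, omega, 0], [-omega, a, 0], [0, 0, -L]])

def rhs(L, K):
    """Factorized right-hand side, Eq. (9)."""
    c = K * (1 - cmath.exp(-L * tau))
    return -L * (lam + 1j * omega - c - L) * (lam - 1j * omega - c - L)

samples = [0.0 + 0.0j, 0.3 + 1.0j, -0.2 + 3.0j, 1.0 - 2.0j]
max_err = max(abs(lhs(L, K) - rhs(L, K)) for L in samples for K in (0.0, 0.5, 1.0))
print("max |Eq.(8) - Eq.(9)| over samples:", max_err)
print("determinant at Lambda = 0:", lhs(0.0 + 0.0j, 1.0))
```

The determinant and the factorized form agree to rounding error, and the determinant vanishes identically at Λ = 0 for any K, reflecting the extra eigenvalue of the adaptively controlled system.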
This zero eigenvalue results from the translational invariance of the system along the line of fixed points (0, 0, K). This means that the K values found in the case of standard Pyragas control again lead to stabilization of the fixed point. The advantage of an adaptive controller is that an appropriate feedback gain is found in an automated way, i.e., without prior knowledge of the domain of stability, as long as a domain of stability exists for the given value of τ.

An additional advantage of an adaptive control scheme is that it allows one to follow slow changes of the system parameters, which are usually present in experimental situations. To test the ability of our adaptive control scheme to cope with such parameter drifts, we slowly vary λ in time, starting from λ(0) = 0.01, first increasing and later decreasing it again. The result is illustrated in Fig. 3: in Fig. 3(a) the region of stability of standard Pyragas control in the (λ, K) plane (see Ref. 7) is marked by green (gray) shading.

FIG. 2. (Color online) Transient time t_c after which the control goal is reached, in dependence on the delay time τ, for TDAS (black circles) and ETDAS with R = 0.35 ((blue) crosses) and R = 0.95 ((red) squares). The dark (dark purple), medium (bright purple), and light (red) shaded regions denote the possible range of τ for R = 0, 0.35, and 0.95, respectively. Parameters as in Fig. 1.

If λ is now slowly increased from its initial value 0.01, K follows the change in such a way that whenever the lower boundary of the stability region is crossed and the fixed point becomes unstable, the adaptation algorithm adjusts K such that the stable region is re-entered. This creates a step-like trajectory in the (λ, K) plane, depicted as a red (solid) curve with an arrow. Finally, if λ is decreased again, K does not change, because it has already attained a value for which the control works in a broad λ-interval; this results in a horizontal trajectory in the (λ, K) plane. Figure 3(b) depicts the corresponding time series of K(t) as a blue (solid) curve and of the drifting parameter λ(t) as a red (dashed) curve.

FIG. 3. (Color online) Adaptive control of the fixed point for a slowly drifting system parameter λ. (a) Adaptive adjustment of K in the (λ, K) plane. Green (gray) shaded region: region of stability of standard Pyragas control. Red (solid) line with arrow: adaptation of the feedback gain K as λ is slowly changed. (b) Corresponding time series K(t) (blue solid line) and λ(t) (red dashed line). Other parameters as in Fig. 1.

To test the robustness of the control algorithm, we add Gaussian white noise ξᵢ (i = 1, 2) with zero mean and unit variance, ⟨ξᵢ(t)⟩ = 0 and ⟨ξᵢ(t) ξⱼ(t − t′)⟩ = δᵢⱼ δ(t − t′), to the system variables x and y:

ẋ(t) = λx(t) + ωy(t) − K(t)[x(t) − x(t−τ)] + Dξ₁(t)   (10a)
ẏ(t) = −ωx(t) + λy(t) − K(t)[y(t) − y(t−τ)] + Dξ₂(t),   (10b)

where D is the strength of the noise. In Fig. 4(a) the ensemble average over 200 realizations, ⟨x(t)⟩ = (1/200) Σᵢ xᵢ(t), for D = 0.1 is shown together with the corresponding standard deviation σ_x(t) of x(t), plotted as a gray (green) curve.

FIG. 4. (Color online) Robustness to noise. (a) Thick solid (red) curve: ensemble average ⟨x(t)⟩ over 200 realizations; thin (blue) curve: x(t) for one example trial; gray (green) curve: corresponding standard deviation σ_x(t) of x(t) for a fixed noise intensity D = 0.1. (b) (Green) crosses: standard deviation σ_x(100) of x(t = 100); dashed (blue) line: standard deviation of the input noise given by D; black (red) curve: asymptotic value K_∞ of the feedback gain. Parameters: y(0) = 0; other parameters as in Fig. 1.

The control is successful in all realizations: for large t, the mean ⟨x(t)⟩ fluctuates around the fixed-point value zero, owing to the finite number of realizations. The standard deviation approaches a value smaller than the standard deviation of the input noise. This is further elaborated in Fig. 4(b), which depicts the standard deviation σ_x at t = 100 versus the noise strength D as (green) crosses. If D becomes too large, the standard deviation exceeds that of the input noise, indicated by the dashed (blue) line; this is the case for D ≳ 0.4. Then the control algorithm generally fails (time series not shown here): the oscillations of x(t) become larger with increasing t. Accordingly, the standard deviation σ_x(t) increases with t, indicating that the dynamics is dominated by the noise, which forces at least some of the realizations to diverge. The black (red) curve in Fig. 4(b) depicts the asymptotic value K_∞ of the feedback gain. For intermediate noise strength, an increased feedback gain K compensates the influence of the noise, ensuring that the control is still successful. For too large D, K increases to a value beyond the domain of stability, and stabilization cannot be achieved.

We conclude that the adaptive algorithm is quite robust to noise (the escape rate is vanishingly small for D ≲ 0.4) and only fails for large noise (D ≳ 0.4). It finds an appropriate gain K for all values of τ for which the standard Pyragas control stabilizes the fixed point, and it is able to follow slow drifts in the system parameters. Note that the method still works if the control term is added only to the x-component; in that case one uses Q(x) = [x(t) − x(t−τ)]²/2.

Next, we consider the ETDAS scheme

Ẋ(t) = A X(t) − F(t),   (11)

where the ETDAS control force F can be written as

F(t) = K Σₙ₌₀^∞ Rⁿ [X(t − nτ) − X(t − (n+1)τ)]   (12a)
     = K [ X(t) − (1 − R) Σₙ₌₁^∞ Rⁿ⁻¹ X(t − nτ) ]   (12b)
     = K [X(t) − X(t − τ)] + R F(t − τ).   (12c)

Here, R ∈ (−1, 1) is a memory parameter that takes into account those states that are delayed by more than one time interval τ. Note that R = 0 recovers the TDAS control scheme introduced by Pyragas. The first form of the control force, Eq. (12a), indicates the noninvasiveness of the ETDAS method, because X*(t − τ) = X*(t) if the fixed point is stabilized. The third form, Eq. (12c), is best suited for an experimental implementation, since it involves states further than τ in the past only recursively.

To apply a speed-gradient adaptation algorithm for the feedback gain K, we follow the same strategy as before and choose the goal function Q(X) = {[x(t) − x(t−τ)]² + [y(t) − y(t−τ)]²}/2. Using again K̇ = −γ ∇_K Q̇, we obtain for a diagonal control scheme

K̇(t) = γ { [x(t) − x(t−τ)] [x(t) − 2x(t−τ) + x(t−2τ) + R S_x(t−τ)]
          + [y(t) − y(t−τ)] [y(t) − 2y(t−τ) + y(t−2τ) + R S_y(t−τ)] }   (13)

with the abbreviations

S_x(t) = Σₙ₌₀^∞ Rⁿ [x(t−nτ) − 2x(t−(n+1)τ) + x(t−(n+2)τ)] = x(t) − 2x(t−τ) + x(t−2τ) + R S_x(t−τ),
S_y(t) = Σₙ₌₀^∞ Rⁿ [y(t−nτ) − 2y(t−(n+1)τ) + y(t−(n+2)τ)] = y(t) − 2y(t−τ) + y(t−2τ) + R S_y(t−τ).   (14)

In Ref. 9 the domains of stability for which ETDAS works were obtained analytically; the intervals of τ increase with R and are larger than in the case of TDAS (R = 0). Figure 2 depicts the transient time t_c in dependence on τ for R = 0.35 and 0.95 as (blue) crosses and (red) squares, respectively. The light (red) and medium (bright purple) shaded regions indicate the corresponding ranges of stability of τ. For odd multiples of half of the intrinsic period T ≡ 2π/ω, i.e., τ = (2n + 1)T/2 with n ∈ ℕ₀, t_c is small, demonstrating the efficiency of the adaptive algorithm. Towards the boundary of the domain of stability, t_c increases but remains finite; the control algorithm fails only very close to the borders of the τ-intervals. We conclude that the adaptive control algorithm for ETDAS converges to appropriate values of K and stabilizes the fixed point even for parameters where TDAS fails.

III. STABILIZATION OF AN UNSTABLE PERIODIC ORBIT IN THE RÖSSLER SYSTEM
In this section we apply the adaptive delayed feedback control algorithm to the Rössler system, a paradigmatic model for chaotic systems. The system exhibits chaotic oscillations born via a cascade of period-doubling bifurcations and is given by the following equations, including the control term:

ẋ(t) = −y(t) − z(t) − K[x(t) − x(t−τ)]   (15a)
ẏ(t) = x(t) + a y(t)   (15b)
ż(t) = b + z(t)[x(t) − µ].   (15c)

In the following, we fix the parameter values as a = 0.2, b = 0.2, and µ = 6.5, for which the chaotic attractor contains an unstable period-one orbit of period T. This orbit can be stabilized by time-delayed feedback of Pyragas type with τ = T and K within a finite interval: at the lower control boundary the limit cycle undergoes a period-doubling bifurcation, and at the upper boundary a Hopf bifurcation occurs, generating a stable or an unstable torus from the limit cycle (Neimark-Sacker bifurcation).

FIG. 5. (Color online) Adaptive control of an unstable periodic orbit in the Rössler attractor, Eq. (15). (a) Phase portrait (after a transient time of 150 time units). (b) Time series of K(t) with adaptive control given by Eq. (16) as solid (blue) curve; the dashed (red) curve shows the goal function Q. Parameters: a = 0.2, b = 0.2, µ = 6.5, τ = T; γ as given in the text.

We use Q(x) = [x(t) − x(t−τ)]²/2 and obtain the adaptation equation for K:

K̇(t) = γ [x(t) − x(t−τ)] [x(t) − 2x(t−τ) + x(t−2τ)]   (16)

with the initial value K(0) = 0. Figure 5(a) depicts a stabilized orbit for a time delay τ = T; panel (b) shows that the adaptation algorithm converges to an appropriate value of K and that the cost function tends to zero.

Contrary to the previous case, it is not possible to set the adaptation gain γ to 1 by rescaling the system; instead, the value of γ is crucial for successful control. To explore the role of γ, we determine the fraction of realizations f_c for which the control goal is reached, as a function of γ. The initial conditions are Gaussian distributed with means ⟨x(0)⟩ = ⟨y(0)⟩ = ⟨z(0)⟩ = 0 and standard deviations σ_x(0) = σ_y(0) = σ_z(0) = 1. The control goal is considered reached at time t_c if ⟨Q⟩ ≡ ∫_{t_c−τ}^{t_c} Q(t′) dt′ drops below a small threshold proportional to τ. Figure 6 depicts f_c(γ) ((red) circles) and t_c(γ) ((blue) crosses), demonstrating that the optimal adaptation gain is around γ ≈ 0.26.
For γ close to this optimal value (γ ≈ 0.26), the algorithm converges fast and reliably; accordingly, the standard deviation of t_c is small.

FIG. 6. (Color online) Adaptive control of the Rössler system. Full (red) circles: fraction of realizations f_c for which the adaptive control algorithm stabilized the orbit, versus the adaptation gain γ; (blue) crosses: average time t_c after which the control goal is reached, versus γ; dotted (blue) lines: error bars (standard deviation) of t_c. Other parameters as in Fig. 5. Total number of realizations: 100.

This demonstrates that for appropriate values of γ, the chaotic dynamics can be controlled.

IV. CONCLUSION
In summary, we have proposed an adaptive controller based on the speed-gradient method to tune the feedback gain of time-delayed feedback control to an appropriate value. We have shown that the adaptation algorithm can find suitable values of the feedback gain and thus stabilize the desired periodic orbit or fixed point. This has been realized both for the stabilization of an unstable focus in a generic model and for the stabilization of an unstable periodic orbit embedded in a chaotic attractor. We have demonstrated the robustness of our method to different initial conditions and to noise. We stress that this adaptive controller may be especially useful for systems with unknown or slowly changing parameters, where the domains of stability in parameter space are unknown. In particular, we have shown by a simulation with a drifting bifurcation parameter λ that our method is able to follow such slow parameter drifts. It should be noted that the automatic adjustment of the feedback gain K is possible without changing the value of the adaptation gain γ of the speed-gradient method. This shows that the algorithm is robust and simple to apply. Our method might also be used to tune more than one parameter, increasing its range of possible applications.

ACKNOWLEDGMENTS
This work was supported by Deutsche Forschungsgemeinschaft in the framework of SFB 910. J. Lehnert, P. Hövel, and E. Schöll acknowledge support by the German-Russian Interdisciplinary Science Center (G-RISC) funded by the German Federal Foreign Office via the German Academic Exchange Service (DAAD). P. Hövel also acknowledges support by the BMBF under grant no. 01GQ1001B. P. Guzenko thanks the DAAD program "Michail Lomonosov (B)" for the support of this work. A. L. Fradkov acknowledges support of the Russian Federal Program "Cadres" (goscontracts 16.740.11.0042, 14.740.11.0942) and RFBR (project 11-08-01218).

1. E. Ott, C. Grebogi, and J. A. Yorke, Phys. Rev. Lett., 1196 (1990).
2. Handbook of Chaos Control, edited by E. Schöll and H. G. Schuster (Wiley-VCH, Weinheim, 2008), second completely revised and enlarged edition.
3. K. Pyragas, Phys. Lett. A, 421 (1992).
4. A. G. Balanov, N. B. Janson, and E. Schöll, Phys. Rev. E, 016222 (2005).
5. A. Ahlborn and U. Parlitz, Phys. Rev. Lett., 264101 (2004).
6. M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. Lett., 114102 (2004).
7. P. Hövel and E. Schöll, Phys. Rev. E, 046203 (2005).
8. J. E. S. Socolar, D. W. Sukow, and D. J. Gauthier, Phys. Rev. E, 3245 (1994).
9. T. Dahms, P. Hövel, and E. Schöll, Phys. Rev. E, 056201 (2007).
10. M. E. Bleich and J. E. S. Socolar, Phys. Lett. A, 87 (1996).
11. W. Just, T. Bernard, M. Ostheimer, E. Reibold, and H. Benner, Phys. Rev. Lett., 203 (1997).
12. W. Just, D. Reckwerth, J. Möckel, E. Reibold, and H. Benner, Phys. Rev. Lett., 562 (1998).
13. K. Pyragas, Phys. Rev. Lett., 2265 (2001).
14. S. Yanchuk, M. Wolfrum, P. Hövel, and E. Schöll, Phys. Rev. E, 026201 (2006).
15. R. Hinz, P. Hövel, and E. Schöll, Chaos, 023114 (2011).
16. N. Baba, A. Amann, E. Schöll, and W. Just, Phys. Rev. Lett., 074101 (2002).
17. H. Nakajima, Phys. Lett. A, 207 (1997).
18. B. Fiedler, V. Flunkert, M. Georgi, P. Hövel, and E. Schöll, Phys. Rev. Lett., 114101 (2007).
19. W. Just, B. Fiedler, V. Flunkert, M. Georgi, P. Hövel, and E. Schöll, Phys. Rev. E, 026210 (2007).
20. A. L. Fradkov, Autom. Remote Control, 1333 (1979).
21. A. L. Fradkov and A. Y. Pogromsky, Introduction to Control of Oscillations and Chaos (World Scientific, Singapore, 1998).
22. A. L. Fradkov, Physics-Uspekhi, 103 (2005).
23. A. L. Fradkov, Cybernetical Physics: From Control of Chaos to Quantum Control (Springer, Heidelberg, Germany, 2007).
24. P. Y. Guzenko, P. Hövel, V. Flunkert, A. L. Fradkov, and E. Schöll, "Adaptive Tuning of Feedback Gain in Time-Delayed Feedback Control," in Proc. 6th EUROMECH Nonlinear Dynamics Conference (ENOC-2008), edited by A. Fradkov and B. Andrievsky, IPACS Open Access Library http://lib.physcon.ru (e-Library of the International Physics and Control Society), 2008.
25. A. L. Fradkov, I. V. Miroshnik, and V. O. Nikiforov, Nonlinear and Adaptive Control of Complex Systems (Kluwer, Dordrecht, 1999).
26. A. Astolfi, D. Karagiannis, and R. Ortega, Nonlinear and Adaptive Control with Applications (Springer, Heidelberg, 2008).
27. M. Krstic, I. Kanellakopoulos, and P. Kokotovic, Nonlinear and Adaptive Control Design (Wiley, New York, 1995).
28. O. Beck, A. Amann, E. Schöll, J. E. S. Socolar, and W. Just, Phys. Rev. E, 016213 (2002).
29. Y. A. Astrov, A. L. Fradkov, and P. Y. Guzenko, Phys. Rev. E, 026201 (2008).
30. V. Flunkert and E. Schöll, Phys. Rev. E 84.