A mathematical framework for amplitude and phase noise analysis of coupled oscillators
M. Bonnin∗, F. Corinto and V. Lanza
Department of Electronics and Telecommunications, Politecnico di Torino
Normandie Univ, France; ULH, LMAH
May 9, 2019
Abstract
Synchronization of coupled oscillators is a paradigm for complexity in many areas of science and engineering. Any realistic network model should include noise effects. We present a description in terms of phase and amplitude deviation for nonlinear oscillators coupled together through noisy interactions. In particular, the coupling is assumed to be modulated by white Gaussian noise. The equations derived for the amplitude deviation and the phase are rigorous, and their validity is not limited to the weak noise limit. We show that, using Floquet theory, a partial decoupling between the amplitude and the phase is obtained. The decoupling can be exploited to describe the oscillator's dynamics solely by the phase variable. We discuss to what extent the reduced model is appropriate, and some implications of the role of noise on the frequency of the oscillators.
Periodically driven oscillators and coupled oscillators are classical problems in nonlinear dynamics, with many relevant applications in physics, chemistry, biology and engineering [1–3]. To make the models more realistic, external inputs can be included, to represent the unavoidable random fluctuations that occur in real world systems, due to the physical properties of the oscillators or induced by the environment. Such disturbances can be modeled by stochastic forces applied to the oscillators, which are then described by stochastic differential equations [4, 5].

Corrupting noise can dramatically affect the performance of oscillators. This is of particular relevance, for instance, in the field of modern electronic devices. Phase noise in oscillators can produce distortion or complete loss of incoming information in traditional receivers, and high bit error rates in phase modulated applications. Traditionally, the action of noise on electronic oscillators has been described as a purely diffusive process [6, 7]. It is commonly assumed that the effect of white noise on the spectrum of an oscillator is to produce a broadening of the oscillator's spectrum, without affecting the positions of the peaks. Recently, this assumption has been questioned by the analysis of some simple solvable models, and by the development of improved mathematical techniques [8–10]. These works have shown that the phase noise problem is best described as a convection–diffusion process, i.e. white noise may also be responsible for a shift in the oscillator's angular frequency.

∗ [email protected]

It may sound surprising that a random perturbation can produce some kind of coherent modification to the oscillator's frequency. In fact, one may expect that, as a result of their random nature, fluctuations have a null net effect and leave the oscillation frequency and amplitude unaffected.
Random perturbations may produce a coherent modification to the oscillator's frequency because of the peculiar characteristics of oscillators. First, the autonomous nature of oscillators implies that any time shifted version of a solution is still a solution. The consequence is that phase shifts are not absorbed, but rather accumulate in time. Second, oscillators are nonlinear dynamical systems. Some directions are preferred to others, so that perturbations along some directions are amplified, while others are attenuated. The result is that coherent behavior can emerge from random excitations.

In the last few years the idea of the role of noise has also changed at a more fundamental level. For a long time noise has been considered a nuisance, to be reduced as much as possible. Only recently was it realized that noise can play a constructive role in natural phenomena and engineering applications. Important examples are stochastic resonance, where a periodic signal is amplified by noise [11–13], and energy harvesting, where noise is used as a power source [14, 15]. It has also been recognized that, in particular situations, noise can favor the synchronization of oscillators. Synchronization is the result of two competing mechanisms. On the one hand, differences in the oscillators' free running frequencies are destructive to synchronization. On the other hand, couplings between the oscillators favor the emergence of collective rhythms. Similarly, noise can at the same time work both towards and against synchronization. The phase diffusion produced by noise obviously acts against synchronicity, while the frequency shift produced by noise may decrease the frequency mismatch between oscillators, thus enhancing the emergence of locked states [16, 17].

To study the balance between the phase diffusion and the frequency shift, one needs a mathematical model capable of capturing the influence of noise on the phase of the oscillators.
In this paper we present a mathematical framework to reduce a network of oscillators subject to white Gaussian noise, described in terms of state variables, to an equivalent amplitude and phase model. In section 2 we introduce some basic concepts about noisy oscillators and the theory of stochastic differential equations, which represents the ideal mathematical framework for the analysis of such problems. In section 3 we derive the main result of the paper, giving a rigorous description in terms of amplitude and phase variables for a network composed of nonlinear coupled oscillators of any order, subject to white Gaussian noise. We also show how the phase dynamics can be partially decoupled from the amplitude dynamics, thus suggesting the possibility to derive reduced order models analogous to the celebrated Kuramoto model [1]. In section 4 we discuss some implications of our model on the phase dynamics, with particular attention to the role of the noise on the expected frequencies of the oscillators. We discuss the physical origin of the frequency shift, and why such an effect should not be neglected with respect to the phase diffusion process. In section 5 we present a simple example to show the application of the transformation to amplitude and phase variables. The example chosen admits an analytical solution, to illustrate the influence of noise on a small network. Section 6 is devoted to conclusions.

Nonlinear oscillators can be conveniently described by the differential equation

\frac{d\mathbf{X}(t)}{dt} = \mathbf{a}(\mathbf{X}(t), \boldsymbol{\xi}(t))    (1)

where X : R → R^n is the state of the oscillator, and ξ represents the unavoidable noise sources, both internal and external, that affect the oscillator. In most practical situations the noise level is expected to be small with respect to the oscillator state, so that we are legitimated to linearize equation (1) around the noiseless state

\frac{d\mathbf{X}(t)}{dt} = \mathbf{a}(\mathbf{X}(t), 0) + \frac{\partial \mathbf{a}(\mathbf{X}(t), 0)}{\partial \boldsymbol{\xi}}\, \boldsymbol{\xi}(t) + \ldots    (2)

Another common assumption is that the noise possesses some "nice" statistical properties. In particular, we shall assume that any noise source can be modeled as a Gaussian white noise. This assumption is justified in a wide range of practical situations, e.g. molecular dynamics, thermal noise, shot noise and Johnson noise in electronics. In general, white noise is a good approximation to a colored noise process when the typical time scales of the underlying deterministic dynamics are much larger than the noise correlation time (quasi–white approximation). By the central limit theorem, it is reasonable to describe ξ(t) as Gaussian distributed. Equation (2) can be rewritten as the stochastic differential equation (SDE) [4, 5]

d\mathbf{X} = \mathbf{a}(\mathbf{X})\, dt + \varepsilon\, B(\mathbf{X})\, d\mathbf{W}    (3)

where X : R → R^n is the state of the oscillator, a : R^n → R^n is a vector field that describes the oscillator dynamics, B : R^n → R^{n×m} is a modulating real valued matrix, and ε is a parameter that measures the noise intensity. W : R → R^m is a vector of Wiener processes, also called Brownian motions: continuous time stochastic processes characterized by zero expectation value E[W] = 0, independent increments, and a Gaussian distribution. The vector valued function a(X) is called the drift term, while the matrix B(X) is called the diffusion term. For matrices B with constant entries the noise is said to be additive, while for a state dependent matrix B(X) the noise is said to be multiplicative.

Stochastic processes are nowhere differentiable; consequently the SDE (3) should be interpreted as a shorthand version of the integral equation

\mathbf{X}(t) = \mathbf{X}(0) + \int_0^t \mathbf{a}(\mathbf{X}(s))\, ds + \varepsilon \int_0^t B(\mathbf{X}(s))\, d\mathbf{W}(s)    (4)

The first integral on the right hand side is a Riemann integral, and it does not pose any particular problem.
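For the interested reader, the integral form (4) is also the starting point for numerical schemes: replacing the integrals with left-point sums gives the Euler–Maruyama method. Below is a minimal sketch on a scalar Ornstein–Uhlenbeck test case dX = −X dt + ε dW (our choice of test equation, not taken from the paper), whose stationary variance ε²/2 is known in closed form:

```python
import numpy as np

# Euler-Maruyama discretization of an SDE of the form (3), for the scalar
# test case dX = -X dt + eps dW (additive noise, stationary variance eps^2/2).
rng = np.random.default_rng(0)
eps, dt, steps = 0.3, 0.01, 200_000
dW = rng.normal(0.0, np.sqrt(dt), steps)   # Wiener increments
xs = np.empty(steps)
x = 0.0
for k in range(steps):
    x += -x * dt + eps * dW[k]             # drift step + noise increment
    xs[k] = x

print(xs.mean(), xs.var())  # mean near 0, variance near eps**2/2 = 0.045
```

The long-time histogram of `xs` approximates the stationary Gaussian density of the process; refining `dt` reduces the discretization bias.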
The second integral is a Riemann–Stieltjes type integral but, differently from common Riemann–Stieltjes integrals, the point at which the integrand is evaluated does matter. The two main interpretation schemes are due to Stratonovich and Itô. According to Stratonovich, the stochastic integral \int_0^t B(\mathbf{X}(s)) \circ d\mathbf{W}(s) is defined as the mean square limit of the partial sums [5]

S_n^S = \sum_{i=1}^{n} B\left(\frac{\mathbf{X}(t_i) + \mathbf{X}(t_{i-1})}{2}\right)\left[\mathbf{W}(t_i) - \mathbf{W}(t_{i-1})\right]

By contrast, in the Itô interpretation the stochastic integral \int_0^t G(s)\, d\mathbf{W}(s) is the mean square limit of [5]

S_n^I = \sum_{i=1}^{n} G(t_{i-1})\left[\mathbf{W}(t_i) - \mathbf{W}(t_{i-1})\right]

Both interpretations have their own pros and cons. The main advantage of the Stratonovich interpretation is that traditional calculus rules apply. The drawback is that, in each time interval, both the initial value X(t_{i−1}) and the final value X(t_i) of the stochastic process X are required to solve the SDE. This feature is known as the "look in the future" property of the Stratonovich integral. As a consequence, the Stratonovich interpretation is not well suited for numerical integration schemes. (Conforming to the standard notation, we use the symbol B(X) ∘ dW to denote the Stratonovich integral, while we reserve the notation B(X) dW for the Itô integral.) Moreover, in the Stratonovich interpretation the stochastic process and the noise increments are correlated, making the determination of stochastic expectations difficult. By contrast, the Itô stochastic integral only requires the initial value of the stochastic process in each time interval. Therefore the Itô interpretation is preferred in the implementation of numerical integration schemes. In Itô SDEs the stochastic process and the noise increments are uncorrelated, making the determination of stochastic expectations easier.
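The difference between the two partial sums can be seen numerically on the classical integrand B(X) = W itself: the midpoint (Stratonovich) sums telescope to W(T)²/2, while the left-point (Itô) sums converge to (W(T)² − T)/2, so the two limits differ by T/2. A sketch (ours, not from the paper):

```python
import numpy as np

# Ito (left-point) vs Stratonovich (midpoint) partial sums for int W dW on [0, T].
rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))      # W(t_0)=0, ..., W(t_n)

S_ito = np.sum(W[:-1] * dW)                     # integrand at t_{i-1}
S_strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)   # integrand at the midpoint value

print(S_strat - S_ito)  # approximately T/2: the two limits disagree
```

The midpoint sum telescopes exactly to `W[-1]**2 / 2`, independently of the partition, which is why Stratonovich calculus reproduces the ordinary chain rule for this integrand.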
The drawback of the Itô view is that a new set of calculus rules, known as Itô calculus, must be used.

The relevant consequence of the two different interpretations is that the same SDE has different solutions depending on whether it is interpreted following Stratonovich or Itô. However, the two interpretations are linked by a transformation that converts any Stratonovich (respectively Itô) SDE into an equivalent Itô (respectively Stratonovich) SDE. By equivalent we mean a different SDE, interpreted with different rules, but that has the same solution [4, 5]. The equivalence opens the possibility to switch from one interpretation to the other, to take advantage of the pros of both definitions. The Stratonovich SDE (the superscripts S and I denote Stratonovich and Itô, respectively)

d\mathbf{X} = \mathbf{a}^S(\mathbf{X})\, dt + \varepsilon\, B^S(\mathbf{X}) \circ d\mathbf{W}    (5)

is equivalent to the Itô SDE

d\mathbf{X} = \mathbf{a}^I(\mathbf{X})\, dt + \varepsilon\, B^I(\mathbf{X})\, d\mathbf{W}    (6)

where (a_i is the i-th component of a, B_{ij} the (i, j) element of B)

a_i^I(\mathbf{X}) = a_i^S(\mathbf{X}) + \frac{\varepsilon^2}{2} \sum_{j,k} \frac{\partial B_{ij}}{\partial x_k}\, B_{jk}    (7)

and

B^S(\mathbf{X}) = B^I(\mathbf{X})    (8)

In the following we shall use the Itô interpretation, omitting the superscript I for simplicity.

A network composed of N weakly coupled nonlinear oscillators with noisy interactions can be described, similarly to (3), by the (Itô) SDEs

d\mathbf{X}_i = \left[\mathbf{a}_i(\mathbf{X}_i) + \varepsilon\, \mathbf{c}_i(\mathbf{X}_1, \ldots, \mathbf{X}_N)\right] dt + \varepsilon\, B_i(\mathbf{X}_1, \ldots, \mathbf{X}_N)\, d\mathbf{W}_i \qquad i = 1, \ldots, N    (9)

where X_i : R → R^n is a stochastic process describing the state of the i-th oscillator, a_i : R^n → R^n is the i-th drift coefficient, B_i : R^{n·N} → R^{n×m} is the i-th n × m diffusion matrix, and W_i : R → R^m is the i-th vector of Wiener processes. For the sake of simplicity, in equation (9) we assume that all oscillators are of the same order (X_i ∈ R^n for all i), but we allow the interaction to vary for each oscillator, both in the modulating matrix B_i and in the random fluctuation W_i.
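Returning for a moment to the scalar setting, the conversion formula (7) can be checked numerically. For dX = ε X ∘ dW (zero Stratonovich drift, B(X) = X), the formula gives the equivalent Itô drift (ε²/2)X; integrating the Stratonovich form with a midpoint (Heun) scheme and the Itô form with Euler–Maruyama along the same noise path should then produce the same trajectory, namely X(0)e^{εW(t)}. A sketch (ours, not from the paper):

```python
import numpy as np

# Check of the Stratonovich-to-Ito conversion (7) for dX = eps * X o dW.
rng = np.random.default_rng(2)
eps, T, n, x0 = 0.5, 1.0, 50_000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

x_strat, x_ito = x0, x0
for dw in dW:
    pred = x_strat + eps * x_strat * dw            # Euler predictor
    x_strat += eps * 0.5 * (x_strat + pred) * dw   # Heun: converges to Stratonovich
    x_ito += 0.5 * eps**2 * x_ito * dt + eps * x_ito * dw  # Ito drift from (7)

exact = x0 * np.exp(eps * dW.sum())  # ordinary calculus applies to the Stratonovich SDE
print(x_strat, x_ito, exact)         # all three agree along the same noise path
```

Without the (ε²/2)X correction, the Euler–Maruyama trajectory would instead converge to the Itô solution X(0)exp(εW(t) − ε²t/2), visibly drifting away from the other two.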
Such a model may arise, for instance, if both a_i and c_i have some small stochastic components, and they are linearized around the noiseless state.

For ε = 0, the SDEs (9) reduce to ordinary differential equations (ODEs) describing N independent, noiseless oscillators. The i-th oscillator is described by the ODE

\frac{d\mathbf{x}_i(t)}{dt} = \mathbf{a}_i(\mathbf{x}_i(t))    (10)

(Oscillators of different order do not pose any particular problem, they only make the notation more involved. Theorems 1 and 2 can be formulated, mutatis mutandis, for oscillators of different orders.)

Figure 1: Two possible decompositions of the stochastic process X_i(t). At the time t the process is decomposed as X_i(t) = x^S_i(θ_i(t)) + Y_i(θ_i(t)) R_i(t) using two different basis vectors. Left: orthogonal basis. Right: "oblique" basis. The red line is the stochastic process X_i, the blue line is the limit cycle x^S_i(t), shown for reference.

We assume that the ODE (10) admits an asymptotically stable T_i-periodic solution, represented by a limit cycle x^S_i(t) in its state space. For each oscillator we define the vector

\mathbf{u}_{1i}(t) = \frac{\mathbf{a}_i(\mathbf{x}^S_i(t))}{\left|\mathbf{a}_i(\mathbf{x}^S_i(t))\right|}    (11)

u_{1i}(t) is the unit vector that, at each time instant, is tangent to the limit cycle x^S_i(t). Together with u_{1i}(t) we consider other n − 1 vectors u_{2i}(t), ..., u_{ni}(t), such that the set {u_{1i}(t), ..., u_{ni}(t)} is a basis for R^n for all t. Let U_i(t) = [u_{1i}(t), ..., u_{ni}(t)] be the matrix whose columns are u_{1i}(t), ..., u_{ni}(t). Such a matrix is invertible by construction; let V_i(t) = U_i^{-1}(t) be its inverse. We define the reciprocal vectors v^T_{1i}(t), ..., v^T_{ni}(t) to be the rows of V_i(t). By construction, the u and v vectors are bi-orthogonal, i.e. v^T_{αi}(t) u_{βi}(t) = δ_{αβ}, where δ_{αβ} is the Kronecker symbol.
We shall also use the matrices Y_i(t) = [u_{2i}(t), ..., u_{ni}(t)] and Z_i(t) = [v_{2i}(t), ..., v_{ni}(t)], and the modulus of the vector field r_i(t) = |a_i(x^S_i(t))| evaluated over the limit cycle.

A crucial concept in the analysis of synchronization of oscillators is the concept of phase. A phase function is intended to represent the projection of the oscillator's state onto a reference trajectory, normally the unperturbed limit cycle. For each oscillator we introduce a phase function θ_i : R^n → [0, T_i), interpreted as an elapsed time from an initial reference point. Consider a point x^S_i(0) on the limit cycle, and assign phase zero to this point, i.e. θ_i(x^S_i(0)) = 0. The phase of the point x^S_i(t) is θ_i(x^S_i(t)) = t, mod T_i. Thus, the phase represents a new parametrization of the limit cycle. Together with the phase function we shall consider an amplitude deviation function R_i : R^n → R^{n−1}, with θ_i, R_i ∈ C^m(R^n), m ≥ 2. The amplitude function R_i is interpreted as an orbital deviation from the limit cycle x^S_i(t), see figure 1.

The following theorem establishes the amplitude and phase equations for the network.

Theorem 1.
Consider the Itô diffusion (9), and the reciprocal bases {u_{1i}(t), ..., u_{ni}(t)} and {v_{1i}(t), ..., v_{ni}(t)}, satisfying the bi-orthogonality condition v^T_{αi} u_{βi} = δ_{αβ}, for all i = 1, ..., N. Consider the coordinate transformation

\mathbf{x}_i = \mathbf{h}_i(\theta_i, \mathbf{R}_i) = \mathbf{x}^S_i(\theta_i(t)) + Y_i(\theta_i(t))\, \mathbf{R}_i(t)    (12)

(We shall use the term "amplitude" instead of the more correct "amplitude deviation" for the sake of simplicity.) Then, in a neighborhood of the limit cycle x^S_i, the phase θ_i(t) and the amplitude R_i(t) are Itô processes and satisfy

d\theta_i = \left[1 + a_{\theta i}(\theta_i, \mathbf{R}_i) + \varepsilon^2\, \hat{a}_{\theta i}(\theta_1 \ldots \mathbf{R}_N) + \varepsilon\, c_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\right] dt + \varepsilon\, B_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\, d\mathbf{W}_i    (13a)

d\mathbf{R}_i = \left[L_i(\theta_i)\mathbf{R}_i + \mathbf{a}_{R i}(\theta_i, \mathbf{R}_i) + \varepsilon^2\, \hat{\mathbf{a}}_{R i}(\theta_1 \ldots \mathbf{R}_N) + \varepsilon\, \mathbf{c}_{R i}(\theta_1 \ldots \mathbf{R}_N)\right] dt + \varepsilon\, B_{R i}(\theta_1 \ldots \mathbf{R}_N)\, d\mathbf{W}_i    (13b)

where (θ_1 ... R_N) is a shorthand notation for (θ_1, R_1, ..., θ_N, R_N), and (we omit the explicit dependence on θ_i and t for simplicity)

a_{\theta i}(\theta_i, \mathbf{R}_i) = \left(r_i + \mathbf{v}^T_{1i} \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\right)^{-1} \mathbf{v}^T_{1i} \left[\mathbf{a}_i(\mathbf{x}^S_i + Y_i \mathbf{R}_i) - \mathbf{a}_i(\mathbf{x}^S_i) - \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\right]    (14a)

\hat{a}_{\theta i}(\theta_1 \ldots \mathbf{R}_N) = -\left(r_i + \mathbf{v}^T_{1i} \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\right)^{-1} \mathbf{v}^T_{1i} \left[\frac{\partial Y_i}{\partial \theta_i}\, B_{R i} B^T_{\theta i} + \frac{1}{2}\left(\frac{\partial \mathbf{a}_i(\mathbf{x}^S_i)}{\partial \theta_i} + \frac{\partial^2 Y_i}{\partial \theta_i^2} \mathbf{R}_i\right) B_{\theta i} B^T_{\theta i}\right]    (14b)

c_{\theta i}(\theta_1 \ldots \mathbf{R}_N) = \left(r_i + \mathbf{v}^T_{1i} \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\right)^{-1} \mathbf{v}^T_{1i}\, \mathbf{c}_i(\mathbf{x}^S_1 + Y_1\mathbf{R}_1, \ldots, \mathbf{x}^S_N + Y_N\mathbf{R}_N)    (14c)

B_{\theta i}(\theta_1 \ldots \mathbf{R}_N) = \left(r_i + \mathbf{v}^T_{1i} \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\right)^{-1} \mathbf{v}^T_{1i}\, B_i(\mathbf{x}^S_1 + Y_1\mathbf{R}_1, \ldots, \mathbf{x}^S_N + Y_N\mathbf{R}_N)    (14d)

L_i(\theta_i) = -Z^T_i \frac{\partial Y_i}{\partial \theta_i}    (14e)

\mathbf{a}_{R i}(\theta_i, \mathbf{R}_i) = -Z^T_i \left[\frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\, a_{\theta i} - \mathbf{a}_i(\mathbf{x}^S_i + Y_i \mathbf{R}_i)\right]    (14f)

\hat{\mathbf{a}}_{R i}(\theta_1 \ldots \mathbf{R}_N) = -Z^T_i \left[\frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\, \hat{a}_{\theta i} + \frac{\partial Y_i}{\partial \theta_i}\, B_{R i} B^T_{\theta i} + \frac{1}{2}\left(\frac{\partial \mathbf{a}_i(\mathbf{x}^S_i)}{\partial \theta_i} + \frac{\partial^2 Y_i}{\partial \theta_i^2} \mathbf{R}_i\right) B_{\theta i} B^T_{\theta i}\right]    (14g)

\mathbf{c}_{R i}(\theta_1 \ldots \mathbf{R}_N) = Z^T_i \left[\mathbf{c}_i(\mathbf{x}^S_1 + Y_1\mathbf{R}_1, \ldots, \mathbf{x}^S_N + Y_N\mathbf{R}_N) - \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\, c_{\theta i}\right]    (14h)

B_{R i}(\theta_1 \ldots \mathbf{R}_N) = Z^T_i \left[B_i(\mathbf{x}^S_1 + Y_1\mathbf{R}_1, \ldots, \mathbf{x}^S_N + Y_N\mathbf{R}_N) - \frac{\partial Y_i}{\partial \theta_i} \mathbf{R}_i\, B_{\theta i}\right]    (14i)

Proof:
See appendix A.

The amplitude and phase equations (13a) and (13b) are exact, since no approximation is involved in their derivation, and they are valid for any value of the noise intensity ε, as long as the Jacobian matrices Dh_i are regular. The amplitude and phase equations obtained crucially depend on the choice of the basis vectors u_{1i}, ..., u_{ni}.

In general, the equations for the two Itô processes, the phase and the amplitude, are coupled together. It is possible to show that, making use of Floquet theory, a partial decoupling between the phase and the amplitude dynamics is obtained. Before introducing the theorem, we recall the main results of Floquet theory [6, 18]. Let A_i(t) = ∂a_i(x^S_i)/∂x_i be the Jacobian matrix of the i-th oscillator evaluated on the limit cycle x^S_i(t), and let Φ_i(t) be the fundamental matrix of the variational equation

\frac{d\mathbf{y}_i(t)}{dt} = A_i(t)\, \mathbf{y}_i(t)

From Floquet theory we get

\Phi_i(t) = P_i(t)\, e^{D_i t}\, P_i^{-1}(0)    (15)

where P_i(t) is a T_i-periodic matrix, and D_i = diag[ν_{1i}, ..., ν_{ni}] is a diagonal matrix whose diagonal entries are the Floquet characteristic exponents [6, 18].

Theorem 2.
If the vectors u_{1i}(t), ..., u_{ni}(t) are chosen such that

\left[r_i\, \mathbf{u}_{1i}(t), \mathbf{u}_{2i}(t), \ldots, \mathbf{u}_{ni}(t)\right] = P_i(t)

then the Itô processes (13a) and (13b) become

d\theta_i = \left[1 + \tilde{a}_{\theta i}(\theta_i, \mathbf{R}_i) + \varepsilon^2\, \hat{a}_{\theta i}(\theta_1 \ldots \mathbf{R}_N) + \varepsilon\, c_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\right] dt + \varepsilon\, B_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\, d\mathbf{W}_i    (16a)

d\mathbf{R}_i = \left[\hat{D}_i \mathbf{R}_i + \tilde{\mathbf{a}}_{R i}(\theta_i, \mathbf{R}_i) + \varepsilon^2\, \hat{\mathbf{a}}_{R i}(\theta_1 \ldots \mathbf{R}_N) + \varepsilon\, \mathbf{c}_{R i}(\theta_1 \ldots \mathbf{R}_N)\right] dt + \varepsilon\, B_{R i}(\theta_1 \ldots \mathbf{R}_N)\, d\mathbf{W}_i    (16b)

where \hat{D}_i = diag[ν_{2i}, ..., ν_{ni}], and the Taylor series of ã_{θi}(θ_i, R_i) and ã_{Ri}(θ_i, R_i) do not contain linear terms in R_i.

Proof: See appendix B.

The asymptotic stability hypothesis of the limit cycle implies that the Floquet characteristic exponents ν_{2i}, ..., ν_{ni} have negative real parts, for all i = 1, ..., N. As a consequence, in the limit ε ≪ 1, the asymptotic amplitude fluctuations are one order of magnitude (in ε) smaller than the phase fluctuations. This observation suggests the idea of neglecting the amplitude dynamics given in (16b), and approximating the stochastic processes R_i in (16a) with some reasonable (possibly constant) estimate. This approach leads to the so called "phase reduced models". For instance, assuming the unperturbed value R_i ≈ 0, and taking into account that ã_{θi}(θ_i, 0) = 0, we obtain the simple phase equation

d\theta_i = \left[1 + \varepsilon^2\, \hat{a}_{\theta i}(\theta_1, \ldots, \theta_N)\right] dt + \varepsilon\, B_{\theta i}(\theta_1, \ldots, \theta_N)\, d\mathbf{W}_i    (17)

We remark that the assumptions leading to (17) are justified only if the amplitudes relax instantaneously to the unperturbed value, an assumption that is often made more for mathematical convenience than for being physically plausible. In fact, this assumption relies on the linear approximation of manifolds, and nonlinear effects become stronger the further we move away from the limit cycle. Moreover, the presence of nearby invariant structures, such as equilibrium points and invariant manifolds, may result in trajectories spending long periods of time away from the limit cycle, thus nullifying the instantaneous relaxation hypothesis. As a consequence, a better solution is to choose the approximation R_i ≈ E[R_i], provided that the expected amplitude can be computed [10].

A full analysis of the amplitude and phase equations (13a), (13b) is a formidable problem and will not be treated here. We limit ourselves to some considerations and some general comments.

Equations (13a), (13b), and the phase reduced model (17), suggest that the drift effects due to ε² â_{θi} become negligible in the limit of vanishingly small noise (ε → 0). For finite ε, the drift effects may become significant if â_{θi} becomes large enough. That the drift effects should not be neglected, even for small values of ε, can be justified as follows. When dealing with stochastic processes, a single realization is not very significant: it is often much more useful to look at expected quantities. To illustrate the point, it is sufficient to consider a single oscillator described by a phase reduced model; the full network model (13a), (13b) is conceptually analogous. Let f(θ) be an arbitrary scalar function of the phase, and let u(t, θ) = E[f(θ)] be the expected value of this function, with initial value u(0, θ) = f(θ).
Then the time evolution of u(t, θ) is governed by the Kolmogorov Backward Equation [5]

\frac{\partial u}{\partial t} = \mathcal{A}\, u    (18)

where \mathcal{A} is the generator of the Itô diffusion

\mathcal{A} f(\theta) = \left(1 + \varepsilon^2\, \hat{a}_{\theta}(\theta)\right) \frac{\partial f(\theta)}{\partial \theta} + \frac{\varepsilon^2}{2}\left(B_{\theta}(\theta)\, B^T_{\theta}(\theta)\right) \frac{\partial^2 f(\theta)}{\partial \theta^2}    (19)

Equation (19) shows that both the O(ε²) drift coefficient and the O(ε) diffusion coefficient in (17) contribute O(ε²) terms to the evolution of the expected quantities in (18), (19); therefore we are not allowed to neglect one contribution with respect to the other.

Expected quantities can be determined using Itô calculus, without solving the Kolmogorov Backward Equation (18). In fact, let X be the solution of an Itô SDE, and let f be a non anticipating function (adapted process); then the zero expectation property of the Itô stochastic integral holds:

E\left[\int_{t_0}^{t} f(\mathbf{X})\, d\mathbf{W}\right] = 0    (20)

Taking the stochastic expectation on both sides of equations (13a), (13b) and using the zero expectation property, we can transform the SDEs for the amplitude and phase into a set of ODEs for the expectation values

E\left[\frac{d\theta_i}{dt}\right] = 1 + E\left[a_{\theta i}(\theta_i, \mathbf{R}_i)\right] + \varepsilon\, E\left[c_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\right] + \varepsilon^2\, E\left[\hat{a}_{\theta i}(\theta_1 \ldots \mathbf{R}_N)\right]    (21a)

\frac{dE\left[\mathbf{R}_i\right]}{dt} = E\left[L_i(\theta_i)\mathbf{R}_i\right] + E\left[\mathbf{a}_{R i}(\theta_i, \mathbf{R}_i)\right] + \varepsilon\, E\left[\mathbf{c}_{R i}(\theta_1 \ldots \mathbf{R}_N)\right] + \varepsilon^2\, E\left[\hat{\mathbf{a}}_{R i}(\theta_1 \ldots \mathbf{R}_N)\right]    (21b)

where the property dE[θ]/dt = E[dθ/dt] has been used. The problem here is the nonlinear nature of the ODEs. In fact, to compute the expectation of the nonlinear functions one needs all the moments of the amplitudes and the phases. We illustrate the issue for the function a_{θi}, for the simple case of a scalar amplitude R_i. Taking the Taylor series in the neighborhood of θ_i = 0, R_i = 0 we have

E\left[a_{\theta i}(\theta_i, R_i)\right] = a_{\theta i}(0, 0) + \frac{\partial a_{\theta i}(0, 0)}{\partial \theta_i}\, E\left[\theta_i\right] + \frac{\partial a_{\theta i}(0, 0)}{\partial R_i}\, E\left[R_i\right] + \frac{1}{2}\, \frac{\partial^2 a_{\theta i}(0, 0)}{\partial \theta_i^2}\, E\left[\theta_i^2\right] + \frac{1}{2}\, \frac{\partial^2 a_{\theta i}(0, 0)}{\partial R_i^2}\, E\left[R_i^2\right] + \frac{\partial^2 a_{\theta i}(0, 0)}{\partial R_i\, \partial \theta_i}\, E\left[R_i \theta_i\right] + \ldots    (22)

Therefore, to compute E[θ_i] and E[R_i] one needs all the moments of θ_i and R_i, i.e. the system is open. To close the system various approaches are available [20, 21]. Among others, moment closure techniques are procedures to approximate the exact (but open) moment dynamics with a closed (but approximate) system. A relatively simple closure technique amounts to assuming that higher order moments can be expressed in terms of the lowest order ones, assuming that the stochastic processes satisfy certain distribution laws (e.g. a Gaussian distribution). Equation (22) is also instrumental to show the limits of the phase reduction method. Although the amplitude is expected to remain small (E[R_i] ≈ 0), the higher order moments of the amplitude still contribute to the expected phase dynamics, a contribution that a pure phase model necessarily neglects.

One may suspect that the drift terms â_{θi} and â_{Ri} are artifacts due to the Itô interpretation. However, it turns out that the frequency drift is also present if the Stratonovich interpretation is used [8–10]. To clarify the point, consider the Stratonovich SDE describing the network of oscillators

d\mathbf{X}_i = \left[\mathbf{a}_i(\mathbf{X}_i) + \varepsilon\, \mathbf{c}_i(\mathbf{X}_1, \ldots, \mathbf{X}_N)\right] dt + \varepsilon\, B_i(\mathbf{X}_1, \ldots, \mathbf{X}_N) \circ d\mathbf{W}_i \qquad i = 1, \ldots, N    (23)

Taking into account that in the Stratonovich interpretation traditional calculus rules apply, and repeating the procedure used in the previous section, the following reduced phase model is derived

d\theta_i = \left[1 + \varepsilon\, c_{\theta i}(\theta_1, \ldots, \theta_N)\right] dt + \varepsilon\, B_{\theta i}(\theta_1, \ldots, \theta_N) \circ d\mathbf{W}_i    (24)

However, in the Stratonovich interpretation it is no longer true that the stochastic processes and the noise increments are uncorrelated. Because of the anticipating nature discussed in section 2, E[\int_{t_0}^{t} f(\mathbf{X}) \circ d\mathbf{W}] \neq 0. To resolve the correlation, a Stratonovich SDE has to be transformed into its equivalent Itô SDE by the addition of the drift correction term [4, 5].
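The different expectation properties of the two integrals are easy to verify by Monte Carlo: for the integrand f(X) = W, the Itô integral has zero mean, in agreement with (20), while the Stratonovich integral has mean T/2 ≠ 0. A sketch (ours, not from the paper):

```python
import numpy as np

# Monte Carlo check: E[int W dW] = 0 (Ito), while E[int W o dW] = T/2 != 0.
rng = np.random.default_rng(3)
T, n, paths = 1.0, 500, 5_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W_left = np.cumsum(dW, axis=1) - dW               # W(t_{i-1}) along each path

ito = np.sum(W_left * dW, axis=1)                 # left-point sums, one per path
strat = np.sum((W_left + 0.5 * dW) * dW, axis=1)  # midpoint sums, one per path
print(ito.mean(), strat.mean())  # approximately 0 and T/2 = 0.5
```

The bias T/2 of the Stratonovich mean is exactly the average of the drift correction term accumulated along each path.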
This is where the drift coefficient, which arises naturally from the quadratic terms in the Itô formula, comes into play [8–10].

In this section we give an example to show the derivation of the phase and amplitude deviation equations starting from the network's state equations. To keep everything reasonably simple, we consider a network composed of two oscillators written in polar coordinates

d\rho_i = \rho_i(1 - \rho_i)\, dt + \varepsilon\, \rho_i\, dW_{\rho i}    (25a)

d\phi_i = \left[\nu_i \rho_i + \varepsilon\, (\phi_j - \phi_i)\right] dt + \varepsilon\, \rho_j\, dW_{\phi i}    (25b)

for i, j = 1, 2 and j ≠ i. The real parameters ν_i define the free running frequencies of the oscillators in the absence of noise. Although most of the information concerning expectation values and phase locking can be obtained directly from equations (25a)–(25b), we first transform these equations into the equivalent amplitude and phase models, using theorems 1 and 2, to show the application of the method.

For ε = 0, equation (25a) admits two stationary solutions: ρ_i = 0, which corresponds to an unstable equilibrium point, and ρ_i = 1, which corresponds to an asymptotically stable limit cycle with angular frequency dφ_i/dt = ν_i. Without loss of generality, we shall assume ν_1 > ν_2. To investigate the synchronization of the two oscillators we look at the phase difference ψ = φ_1 − φ_2. The oscillators are phase locked if the phase difference remains constant in time. If the two oscillators have different free running frequencies ν_1 ≠ ν_2 and the coupling is neglected, the phase difference ψ = (ν_1 − ν_2)t grows unboundedly large (in absolute value) as time passes. Conversely, if the coupling effect is taken into account but the noise influence is ignored, the phase difference evolves according to

\frac{d\psi}{dt} = \nu_1 - \nu_2 - 2\varepsilon\psi

Asymptotically the two oscillators become phase locked, with phase difference

\psi_s = \frac{\nu_1 - \nu_2}{2\varepsilon}    (26)

Moreover, since dψ/dt > 0 for ψ < ψ_s and dψ/dt < 0 for ψ > ψ_s, the phase locked state is asymptotically stable.

5.1 Amplitude and phase equations using an orthogonal frame

Each uncoupled oscillator of the network (25a)–(25b) admits the limit cycle

\mathbf{x}^S_i = (\rho_i, \phi_i) = \begin{bmatrix} 1 \\ \nu_i t \end{bmatrix}    (27)

It follows that the orthogonal basis is given by the tangent vector u_{1i}(t) = [0, 1]^T and the orthogonal unit vector u_{2i}(t) = [1, 0]^T. Since the matrix U_i(t) = [u_{1i}(t), u_{2i}(t)] is orthogonal, we have v_{1i}(t) = u_{1i}(t) and v_{2i}(t) = u_{2i}(t). The change of coordinates (12) implies ρ_i = 1 + R_i and φ_i = ν_i θ_i.
Since the Jacobian of the transformation satisfies |det Dh_i(θ_i, R_i)| = ν_i ≠ 0, the coordinate transformation holds for any value of the noise intensity. Using theorem 1, it is straightforward to derive the amplitude and phase equations

d\theta_i = \left[1 + R_i + \varepsilon \left(\frac{\nu_j}{\nu_i}\, \theta_j - \theta_i\right)\right] dt + \varepsilon\, \frac{1 + R_j}{\nu_i}\, dW_{\phi i}    (28a)

dR_i = -R_i(1 + R_i)\, dt + \varepsilon\, (1 + R_i)\, dW_{\rho i}    (28b)

As expected, since the orthogonal frame is not the Floquet basis of theorem 2, the drift coefficient of the phase equation contains a linear term in R_i.

5.2 Amplitude and phase equations using the Floquet basis

The Jacobian matrix of (25a), (25b) for ε = 0, evaluated over the limit cycle, is

A_i(\mathbf{x}^S_i) = \begin{bmatrix} -1 & 0 \\ \nu_i & 0 \end{bmatrix}    (29)

with eigenvalues λ_{1i} = 0 and λ_{2i} = −1, for all i. The corresponding eigenvectors are the Floquet vectors u_{1i}(t) = [0, 1]^T and u_{2i}(t) = [1, −ν_i]^T. Inverting the matrix U_i(t) = [u_{1i}(t), u_{2i}(t)] we find the Floquet co-vectors v_{1i}(t) = [ν_i, 1]^T and v_{2i}(t) = [1, 0]^T. The relation between the old and the new coordinates is ρ_i = 1 + R_i and φ_i = ν_i(θ_i − R_i). As before, the Jacobian matrix Dh_i is regular on the whole (θ_i, R_i) plane. Consequently, the phase and amplitude equations in the new basis are

d\theta_i = \left[1 - R_i^2 + \varepsilon \left(\frac{\nu_j}{\nu_i}\,(\theta_j - R_j) - (\theta_i - R_i)\right)\right] dt + \varepsilon \left[(1 + R_i)\, dW_{\rho i} + \frac{1 + R_j}{\nu_i}\, dW_{\phi i}\right]    (30a)

dR_i = -R_i(1 + R_i)\, dt + \varepsilon\, (1 + R_i)\, dW_{\rho i}    (30b)

Comparing (28a) to (30a), we observe that, in accordance with theorem 2, the latter has a drift coefficient whose Taylor series starts with a quadratic term in R_i. The use of the Floquet basis also emphasizes the role played by higher order moments, e.g. the variance, on the angular frequencies of the oscillators.

In this particular example E[R_i] and E[R_i^2] can be determined analytically, because equation (30b) depends on R_i only. Thus we can write a one dimensional Fokker–Planck equation (see [4]) for the probability density function (PDF) of the amplitude variable

\frac{\partial p_i(R_i, t)}{\partial t} = -\frac{\partial}{\partial R_i}\left[-R_i(1 + R_i)\, p_i(R_i, t)\right] + \frac{\varepsilon^2}{2}\, \frac{\partial^2}{\partial R_i^2}\left[(1 + R_i)^2\, p_i(R_i, t)\right]    (31)

In the limit t → +∞ it admits the stationary solution

p_i(R_i) = N\, (1 + R_i)^{-2} \exp\left\{\frac{2}{\varepsilon^2}\left[\ln(1 + R_i) - R_i\right]\right\}    (32)

Figure 2: Probability density function for the amplitude deviation given by (32), for different values of the noise intensity ε.

Figure 3: Left: expected amplitude E[R_i] versus the noise intensity ε. Right: expected squared amplitude E[R_i^2] versus the noise intensity ε.

where N is a normalization constant that can be determined through the requirement \int_{-\infty}^{+\infty} p_i(R_i)\, dR_i = 1.
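The normalization and the first two moments of (32) are easily obtained by numerical quadrature. For the illustrative value ε = 0.2 (an arbitrary choice on our part), the substitution s = 1 + R_i turns (32) into a Gamma density, so one expects E[R_i] = −ε²/2 = −0.02 and E[R_i²] = ε²/2 = 0.02; a quadrature sketch:

```python
import numpy as np

# Numerical normalization of the stationary PDF (32) and its first two moments.
eps = 0.2                                    # noise intensity (illustrative value)
R = np.linspace(-0.999, 4.0, 400_000)        # support is R > -1; tail truncated
dx = R[1] - R[0]
logp = -2.0 * np.log1p(R) + (2.0 / eps**2) * (np.log1p(R) - R)
p = np.exp(logp - logp.max())                # unnormalized, overflow-safe
p /= p.sum() * dx                            # fixes the constant N

ER = (R * p).sum() * dx                      # expected amplitude deviation
ER2 = (R**2 * p).sum() * dx                  # expected squared amplitude
print(ER, ER2)  # -eps**2/2 = -0.02 and eps**2/2 = 0.02
```

The negative expected amplitude and the O(ε²) second moment are exactly the quantities that enter the frequency correction in (34) and (36).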
Using the stationary PDF we can compute the expectation value for an arbitrary function of theamplitude through E [ f ( R i )] = Z + ∞−∞ f ( R i ) p i ( R i ) dR i (33)Figure 2 shows the stationary PDF p i ( R i ) for different values of the parameter ε . For small noiseintensity the PDF is well approximated by a Gaussian distribution. Increasing the noise intensity,we observe an increase in the variance (due to diffusion) and a shift in the mode (the most probablevalue of the amplitude). We also observe the PDF becomes asymmetric with respect to the maximumvalue, thus indicating that higher order moments become more and more relevant. Figure 3 showsthe first two moments for the amplitude deviation. It is seen the quadratic dependence on the noiseintensity as predicted by theorem 1.Taking stochastic expectations of (30a) and using the zero expectation property of Itˆo integralwe have E (cid:20) dθ i dt (cid:21) =1 − E (cid:2) R i (cid:3) + ε (cid:20) ν j ν i ( E [ θ j ] − E [ R j ]) − ( E [ θ i ] − E [ R i ]) (cid:21) (34) .2 0.25 0.3 0.35 0.4−2−1.8−1.6−1.4−1.2−1−0.8−0.6 ε E [ ψ ] with noiseno noise
Figure 4: Left: Comparison between the expected asymptotic phase difference E[ψ] obtained by solving (36) and the phase difference given by (26), versus the noise intensity ε, for two oscillators with different free running angular frequencies, ν = 1 and ν = 2. Right: Phase difference for two oscillators (free running angular frequencies ν = 1 and ν = 2, respectively) as a function of time, for a specific realization of the noise. The noise intensity is set to ε = 0.05. The phase difference in absence of noise is shown for reference.
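The moments plotted in Figure 3 can be cross-checked by direct simulation of the amplitude equation (30b). The sketch below (Python, Euler–Maruyama; step size, horizon, path count and the choice μ_i = 1 are ours) estimates the stationary E[R_i] at two noise intensities. Doubling ε roughly quadruples the magnitude of the mean, the quadratic dependence discussed above; the small-noise value is E[R_i] ≈ −ε^2/2.

```python
import numpy as np

def mean_amplitude(eps, n_paths=5000, dt=2e-3, t_end=10.0, seed=0):
    """Euler-Maruyama for dR = -R(1+R) dt + eps*(1+R) dW (eq. (30b), mu_i = 1).
    Averages over paths and over the second half of the time window; the drift
    relaxes on an O(1) time scale, so the first half is treated as transient."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    R = np.zeros(n_paths)
    acc, n_acc = 0.0, 0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        R = R + (-R * (1.0 + R)) * dt + eps * (1.0 + R) * dW
        if k >= n_steps // 2:        # discard the transient
            acc += R.mean()
            n_acc += 1
    return acc / n_acc

m1 = mean_amplitude(0.1)   # expected near -0.1**2 / 2 = -0.005
m2 = mean_amplitude(0.2)   # expected near -0.2**2 / 2 = -0.02
```

The estimates are negative and scale approximately as ε^2, in line with the shape of Figure 3 (left).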
Multiplying by ν_i, it is straightforward to obtain the equation for the expected phase difference (we recall that θ_i is a normalized phase variable: multiplication by ν_i is necessary to retrieve the non-normalized phase variable)

dE[ψ]/dt = ν_i − ν_j + ν_j E[R_j^2] − ν_i E[R_i^2] − 2ε (ν_j E[R_j] − ν_i E[R_i]) − 2ε E[ψ]    (35)

Since the amplitude equation is the same for all the oscillators, we have E[R_1] = E[R_2] and E[R_1^2] = E[R_2^2], and then

dE[ψ]/dt = (ν_1 − ν_2) (1 − E[R^2] + 2ε E[R]) − 2ε E[ψ]    (36)

(The case where the noise intensity ε is not equal for all the oscillators can be treated similarly; in that case E[R_i] ≠ E[R_j] and E[R_i^2] ≠ E[R_j^2], and the solution of (35) would be more complicated.) Asymptotically the oscillators converge to the phase locked state

E[ψ] = (ν_1 − ν_2) (1 − E[R^2] + 2ε E[R]) / (2ε)    (37)

which is different from the phase locked state in absence of noise (26). The phase difference in presence of noise is compared with that obtained without noise in figure 4: on the left, the asymptotic expected phase difference versus the noise intensity; on the right, the phase difference versus time for a specific realization of the noise process. It can be seen how noise acts to reduce the phase difference between the oscillators.

We have considered networks of coupled nonlinear oscillators subject to white Gaussian noise. We have shown that the network can be conveniently described by stochastic differential equations. The advantages and disadvantages of the two most popular interpretations, i.e. Itô and Stratonovich, have been briefly outlined. Using projection techniques and Itô calculus, we have derived a rigorous mathematical description of the network dynamics, in terms of the phases and amplitude deviations of the oscillators. We have shown that, using Floquet theory, a partial decoupling between the phase and the amplitude dynamics can be obtained.
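The convergence to the phase-locked state (37) can be illustrated with a short deterministic check of the moment equation (36). The sketch below (Python) uses the frequencies of Figure 4, ν_1 = 1 and ν_2 = 2, and illustrative small-noise moment values E[R] ≈ −ε^2/2 and E[R^2] ≈ ε^2/2 obtained from the stationary PDF (32) with μ_i = 1 (an assumption): forward Euler integration of (36) relaxes to the fixed point (37) at rate 2ε.

```python
# Deterministic check of the moment equation (36) and its fixed point (37).
nu1, nu2, eps = 1.0, 2.0, 0.3
ER, ER2 = -eps**2 / 2.0, eps**2 / 2.0          # small-noise moment estimates
c = (nu1 - nu2) * (1.0 - ER2 + 2.0 * eps * ER)  # constant drift term in (36)

dt, t_end = 1e-3, 40.0
psi = 0.0                                       # E[psi](0), arbitrary start
for _ in range(int(t_end / dt)):
    psi += (c - 2.0 * eps * psi) * dt           # forward Euler for (36)

psi_inf = c / (2.0 * eps)                       # phase-locked state (37)
```

With these numbers psi_inf ≈ −1.55 at ε = 0.3, the same order of magnitude as the expected phase differences shown in Figure 4 (left).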
This idea leads to the development of phase reduced models analogous to the celebrated Kuramoto model. The amplitude and phase description highlights the influence of noise on the phases of the oscillators, and represents a good starting point for the analysis of the role of noise in synchronization. We have shown that noise can prevent phase locking of oscillators by producing phase diffusion. It was also shown that noise can favor synchronization: the oscillators may adjust their frequencies in response to the noise intensity, and as a consequence noise can actively contribute to synchronization by decreasing the frequency mismatch. A simple example has been used to illustrate the derivation of the amplitude and phase equations and their analysis.

Acknowledgement
This work was supported by the Università Italo Francese in the framework Galileo 2014–2015 (project number G14-145).
Appendix A: Proof of theorem 1
First we show that a neighborhood of the limit cycle exists where θ_i and R_i are Itô processes. The Jacobian matrix of the coordinate transformation (12) evaluated on the limit cycle is

Dh_i(θ_i, R_i) |_{R_i=0} = [ r_1i(θ_i) u_1i(θ_i), u_2i(θ_i), . . . , u_ni(θ_i) ]

Since { u_1i(t), . . . , u_ni(t) } is a basis for R^n, it follows that det Dh_i(θ_i, R_i) |_{R_i=0} ≠ 0. Then by the inverse function theorem a neighborhood of R_i = 0 exists where h_i is invertible. Moreover, if h_i is of class C^k then its inverse is also of class C^k. Taking the inverse of h_i we can write θ_i = θ_i(x_i) and R_i = R_i(x_i), and if the basis vectors are smooth enough it follows from the Itô formula that θ_i and R_i are Itô processes.

Now we prove that θ_i and R_i satisfy equations (13a) and (13b). Using the Itô formula and eq. (9), x_i = h_i(θ_i, R_i) implies

dx_i = (∂h_i/∂θ_i) dθ_i + (∂h_i/∂R_i) dR_i + (1/2)(∂^2 h_i/∂θ_i^2)(dθ_i)^2 + (1/2) dR_i^T (∂^2 h_i/∂R_i^2) dR_i + (∂^2 h_i/∂θ_i ∂R_i) dθ_i dR_i
     = [ a_i(h_i(θ_i, R_i)) + ε c_i(h_1(θ_1, R_1), . . . , h_N(θ_N, R_N)) ] dt + ε B_i(h_1(θ_1, R_1), . . . , h_N(θ_N, R_N)) dW_i

where ∂h_i/∂R_i and ∂^2 h_i/∂R_i^2 are the Jacobian and the Hessian matrices, respectively. Introducing (12) yields

( a_i(x_i^S) + (∂Y_i/∂θ_i) R_i ) dθ_i + Y_i dR_i + (1/2)( ∂a_i(x_i^S)/∂θ_i + (∂^2 Y_i/∂θ_i^2) R_i )(dθ_i)^2 + (∂Y_i/∂θ_i) dθ_i dR_i = [ a_i(x_i^S + Y_i R_i) + ε c_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) ] dt + ε B_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) dW_i    (38)

Multiplying (38) to the left by v_1i^T and using the bi-orthogonality condition we get

( r_1i + v_1i^T (∂Y_i/∂θ_i) R_i ) dθ_i + (1/2) v_1i^T ( ∂a_i(x_i^S)/∂θ_i + (∂^2 Y_i/∂θ_i^2) R_i )(dθ_i)^2 + v_1i^T (∂Y_i/∂θ_i) dθ_i dR_i = v_1i^T [ a_i(x_i^S + Y_i R_i) + ε c_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) ] dt + ε v_1i^T B_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) dW_i    (39)

Multiplying (38) to the left by Z_i^T gives

Z_i^T (∂Y_i/∂θ_i) R_i dθ_i + dR_i + (1/2) Z_i^T ( ∂a_i(x_i^S)/∂θ_i + (∂^2 Y_i/∂θ_i^2) R_i )(dθ_i)^2 + Z_i^T (∂Y_i/∂θ_i) dθ_i dR_i = Z_i^T [ a_i(x_i^S + Y_i R_i) + ε c_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) ] dt + ε Z_i^T B_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) dW_i    (40)

Since θ_i and R_i are Itô processes, they satisfy relations of the form dθ_i = α_i dt + β_i dW_i and dR_i = γ_i dt + σ_i dW_i, respectively. By the Itô lemma, (dθ_i)^2 = β_i β_i^T dt and dθ_i dR_i = σ_i β_i^T dt. Introducing these results in (39), (40) and equating the terms in dW_i we obtain

β_i = ε ( r_1i + v_1i^T (∂Y_i/∂θ_i) R_i )^(−1) v_1i^T B_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N)    (41a)

σ_i = ε Z_i^T B_i(x_1^S + Y_1 R_1, . . . , x_N^S + Y_N R_N) − Z_i^T (∂Y_i/∂θ_i) R_i β_i    (41b)

Finally, using (41a), (41b) together with (dθ_i)^2 = β_i β_i^T dt and dθ_i dR_i = σ_i β_i^T dt in (39), (40), and rearranging the terms, we get the thesis. □

Appendix B: Proof of theorem 2
First of all, we observe that by hypothesis the columns of the matrix P_i(t) are linearly independent for any t, and therefore they can be chosen as a basis for R^n. Moreover, from equation (11), we have r_1i u_1i(t) = dx_i^S/dt. Therefore ν_1i = 0, since dx_i^S/dt is the solution of the variational equation associated to the structural Floquet exponent. Furthermore, equation (15) implies P_i(t) = Φ_i(t) P_i(0) e^(−D_i t). Taking the derivative of (15) yields

dΦ_i/dt = (dP_i/dt) e^(D_i t) P_i^(−1)(0) + P_i(t) D_i e^(D_i t) P_i^(−1)(0)

and taking into account that Φ_i(t) is a fundamental matrix of the variational equation we obtain dP_i/dt = A_i P_i − P_i D_i, and consequently dY_i/dt = A_i Y_i − Y_i D_i. Substituting the expression of dY_i/dt into (14a), (14e) and (14f), taking the Taylor series a_i(x_i^S + Y_i R_i) = a_i(x_i^S) + A_i Y_i R_i + . . . , and using the bi-orthogonality condition, the thesis follows. □

References

[1] Y. Kuramoto,
Chemical Oscillations, Waves and Turbulence, Springer-Verlag, Berlin, 2003.
[2] A. Pikovsky, M. Rosenblum and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press, 2001.
[3] E. Izhikevich, Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, MIT Press, 2006.
[4] C. W. Gardiner, Handbook of Stochastic Methods, Springer, Berlin, 1985.
[5] B. Øksendal, Stochastic Differential Equations, Springer, New York, 2003.
[6] A. Demir, A. Mehrotra and J. Roychowdhury, IEEE T Circuits–I, (5), 655 (2000).
[7] F. X. Kaertner, Int J Circ Theor App, 485 (1990).
[8] K. Yoshimura and K. Arai, Phys Rev Lett, 154101 (2008).
[9] M. Bonnin and F. Corinto, IEEE T Circuits–II, (8), 2104 (2013).
[10] M. Bonnin and F. Corinto, IEEE T Circuits–II, (3), 158 (2014).
[11] R. Benzi, G. Parisi, A. Sutera and A. Vulpiani, Tellus, (1), 10 (1982).
[12] K. Wiesenfeld and F. Moss, Nature, 33 (1995).
[13] L. Gammaitoni, P. Hänggi, P. Jung and F. Marchesoni, Rev Mod Phys, (1), 223 (1998).
[14] S. P. Beeby, M. J. Tudor and N. M. White, Meas Sci Technol, (2), 175 (2006).
[15] L. Gammaitoni, Contemp Phys, (2), 119 (2012).
[16] B. Hauschildt, N. B. Janson, A. Balanov and E. Schöll, Phys Rev E, (5), 051906 (2006).
[17] H. Nakao, K. Arai and Y. Kawamura, Phys Rev Lett, (18), 184101 (2007).
[18] M. Bonnin, F. Corinto and M. Gilli, IEEE T Circuits–II, (10), 638 (2012).
[19] G. S. Medvedev, Phys Lett A, 1712 (2010).
[20] J. Hespanha, Moment closure for biochemical networks, (1), 52 (2009).