Ergodicity for a stochastic Hodgkin-Huxley model driven by Ornstein-Uhlenbeck type input
R. Höpfner*, E. Löcherbach, M. Thieullen*
Johannes Gutenberg-Universität Mainz, Université de Cergy-Pontoise and Université Pierre et Marie Curie.
September 28, 2018
Abstract
We consider a model describing a neuron and the input it receives from its dendritic tree when this input is a random perturbation of a periodic deterministic signal, driven by an Ornstein-Uhlenbeck process. The neuron itself is modeled by a variant of the classical Hodgkin-Huxley model. Using the existence of an accessible point where the weak Hörmander condition holds and the fact that the coefficients of the system are analytic, we show that the system is non-degenerate. The existence of a Lyapunov function allows us to deduce the existence of (at most a finite number of) extremal invariant measures for the process. As a consequence, the complexity of the system is drastically reduced in comparison with the deterministic system.
Keywords: Hodgkin-Huxley model, degenerate diffusion processes, time inhomogeneous diffusion processes, weak Hörmander condition, periodic ergodicity.
AMS Classification: 60J60, 60J25, 60H07

* This work has been supported by the Agence Nationale de la Recherche through the project MANDy, Mathematical Analysis of Neuronal Dynamics, ANR-09-BLAN-0008-01. E-mail addresses: [email protected], [email protected] and [email protected]

1 Introduction
In this paper we study a stochastic model for a spiking neuron together with the input it receives from its dendritic tree. Our model is derived from the well-known deterministic Hodgkin-Huxley model and takes the form of a highly degenerate time inhomogeneous stochastic system.

The deterministic Hodgkin-Huxley model for the membrane potential of a neuron has been extensively studied over the last decades. There seems to be a large agreement (see e.g. the introduction in Destexhe 1997) that the 4-dimensional dynamical system proposed initially by Hodgkin and Huxley 1952 models adequately the mechanism of spike generation in response to an external stimulus in many types of neurons. Hodgkin and Huxley modeled the behavior of ion channels with respect to the two ion currents which are predominant (import of Na+ and export of K+ ions through the membrane) in a way which later was found experimentally (cf. Izhikevich 2007, Figure 2.8 on p. 33) to correspond to a structure in the voltage gated ion channels which was not yet observable in 1952. Also generalizations of deterministic Hodgkin-Huxley models taking into account a larger number of types of ion channels have been considered; for a modern introduction see Izhikevich 2007.

A classical deterministic Hodgkin-Huxley system (see e.g. Izhikevich 2007, pp. 33 and 37–38) has four variables, the voltage (measured by some electrode in the soma of the neuron) and three gating variables (the state of specific voltage sensors which activate or deactivate ion channels). In addition there is some fixed deterministic function of time which represents an input. Our stochastic Hodgkin-Huxley model has only one source of stochasticity: we are interested in the effect of an external noise on the behavior of the system. Thus we replace deterministic input by the increments of a stochastic process whose stochastic differential equation plays the role of a fifth equation.
A cortical neuron belonging to an active cortical network receives its input from a large number of other neurons through a huge number of synapses located on a dendritic tree of complex topological structure. This is our reason for modeling dendritic input as an autonomous diffusion process (ξ_t)_{t≥0}, time inhomogeneous and of mean-reverting Ornstein-Uhlenbeck type, having some T-periodic deterministic signal t → S(t) coded in its semigroup. We think of t → S(t) as a signal processed by the network: roughly speaking, the signal is present in the mean values of the diffusion process as a function of time t ≥ 0. For the three gating variables we keep the corresponding Hodgkin-Huxley equations unchanged: their activity is conditionally deterministic given the voltage, without intrinsic source of randomness. Our equation for the voltage keeps the traditional Hodgkin-Huxley form of the drift coefficient (a function of the voltage and the gating variables) but replaces the classical deterministic input by a stochastic input dξ_t at time t ≥ 0. In this way, we are led to consider a 5-dimensional random dynamical system (X_t)_{t≥0} governed by one-dimensional Brownian motion which represents the external noise: the driving Brownian motion of the Ornstein-Uhlenbeck type SDE, present in two of its five equations.

The present paper is the second part of our study of periodic ergodicity for such models. The first part is the companion paper Höpfner, Löcherbach and Thieullen 2013 where we address the existence of densities for strongly degenerate time inhomogeneous random models which contain the present model of interest as a particular case.

The first main result of the present paper (Theorem 2 in Section 2.4) shows that for our highly degenerate and time inhomogeneous 5-dimensional stochastic system, the weak Hörmander condition holds at all points of the state space. As a consequence, continuous transition densities exist with respect to the 5-dimensional Lebesgue measure at all times t > 0. Our second main result concerns ergodicity, which we state either in terms of the T-skeleton chain (X_{kT})_{k∈IN} (the process observed at multiples of the periodicity T) or in terms of the 6-dimensional continuous-time process (i_T(t), X_t)_{t≥0} where i_T(t) is t modulo T. We recall that a strong Markov process is called 'recurrent in the sense of Harris' if it possesses an invariant measure m such that any set A with m(A) > 0 is visited infinitely often, almost surely. For the T-skeleton chain, we prove the existence of a finite number of disjoint Harris sets (more precisely: there is at least one Harris set, and at most a finite number) in the sense of Meyn and Tweedie 1992, Theorems 2.1 and 4.5.
In restriction to any of these Harris sets, the skeleton chain is recurrent in the sense of Harris, and we have one extremal invariant measure on each Harris set. Similarly in continuous time, the 6-dimensional system (i_T(t), X_t)_{t≥0} admits a finite number (at least one) of disjoint invariant control sets in the sense of Arnold and Kliemann 1987; we have one extremal invariant measure on each invariant control set, and in restriction to any of the control sets, the process is recurrent in the sense of Harris. The finitely many disjoint Harris and/or invariant control sets represent a finite number of typical 'stochastic equilibrium settings' for the process (in a sense of invariant law, in a sense of long time behavior), in contrast to the deterministic situation where infinitely many equilibrium states coexist. The fact that the Ornstein-Uhlenbeck diffusion has analytic coefficients comes in at several key steps of our proofs, and control arguments together with the support theorem for diffusions (see e.g. Millet and Sanz-Solé 1994) play a main role.

Approaches which view neurons as deterministic (e.g. Guckenheimer and Oliva 2002, Rubin and Wechselberger 2007, Desroches, Guckenheimer, Krauskopf, Kuehn, Osinga and Wechselberger 2012) or close-to-deterministic dynamical systems (e.g. Berglund and Gentz 2010, Berglund and Landon 2012) have received a lot of attention, and quite often – because of the analytical complexity of the deterministic Hodgkin-Huxley model – one is forced to switch to simplified systems of equations such as the FitzHugh-Nagumo model or the Morris-Lecar model whose dynamics are tractable, at the price of questionable biological relevance. In this approach, 'noise' added to the classical deterministic dynamical system is often considered as 'small' in order to make the stochastic system mimic essential features of the deterministic system.
In contrast to this aim, in our approach 'noise' – in the form of the one-dimensional Brownian motion driving the Ornstein-Uhlenbeck SDE and by means of this the 5-dimensional stochastic system – is strong enough to smoothen the stochastic dynamics, despite the degeneracy of the system, by the interaction between drift and diffusion through its 5 dimensions. The Harris properties which we prove open a road which allows us to work, in restriction to Harris sets, with ratio limit theorems or with limit theorems, and to deal in a genuinely stochastic way with long-time properties of a stochastic Hodgkin-Huxley model.

One of our main results proves the existence of only finitely many extremal invariant measures for the stochastic Hodgkin-Huxley system. In contrast to this situation, the deterministic Hodgkin-Huxley system exhibits a broad range of possible and qualitatively quite different behavior of its solution, depending on the specific form of the input (time-constant input, time-periodic input, jump functions ...), and depending on the starting point. Desired periodic behavior (which resembles spiking patterns observed in neurons) appears only in special situations. Rinzel and Miller 1980 specified some interval I such that a time-constant input c ∈ I results in periodic behavior of the solution. Aihara, Matsumoto and Ikegaya 1984 determined some interval J such that an oscillating input t → S(ft) with frequencies f ∈ J (for some given continuous 1-periodic function S) yields periodic behavior of the solution. Periodic behavior includes cases where one period of the output is equal to some multiple of the period of the input. Both papers also specify intervals Ĩ and J̃ such that a time-constant input c ∈ Ĩ or an oscillating input at frequency f ∈ J̃ leads to a chaotic behavior of the solution. The intricate structure of the tableau of possible types of behavior (with 'modern' model constants as given in Izhikevich 2007, pp. 37–38, to be used below, slightly different from Hodgkin and Huxley's original ones) was checked by numerical calculations in Endler 2012, Chapter 2, who obtained interesting schemes of classification. In contrast to the deterministic situation, our results show that 'noise smoothens the tableau' and simplifies it in an essential way: in our stochastic Hodgkin-Huxley model, the finite number of Harris sets for the skeleton chain (X_{kT})_{k∈IN} or the finite number of invariant control sets for the 6-dimensional continuous-time process (i_T(t), X_t)_{t≥0} corresponds to a finite number of possibilities for 'typical' long time behavior.

Our paper is organized as follows. We present the deterministic Hodgkin-Huxley system and our stochastic model in Sections 2.1–2.2. Sections 2.3–2.4 focus on the weak Hörmander condition. First, Theorem 1 formulates a sufficient condition (considering Lie brackets of some fixed order) for validity of the weak Hörmander condition: on the state space of the 5-dimensional stochastic Hodgkin-Huxley system, this condition holds up to at most an exceptional set of Lebesgue measure zero. Theorem 2 then strengthens this and proves (via a control argument) that in fact the exceptional set is void and the weak Hörmander condition holds everywhere. This is our first major result for the stochastic Hodgkin-Huxley system. As a consequence, Corollary 1 states continuity properties of Lebesgue densities of transition probabilities. Section 2.5 deals with ergodicity properties of the system. Thanks to a Lyapunov function we show that some large compact set is visited infinitely often. Then, using Nummelin splitting based on the results of Corollary 1, we can cover the compact set with a finite number of balls of a certain type which induce renewal times in the sense of Nummelin 1978. Theorem 3 then establishes Harris recurrence in restriction to a finite number of Harris sets for the skeleton chain (X_{kT})_{k∈IN}.
Theorem 4 formulates this for the continuous-time process (i_T(t), X_t)_{t≥0} in restriction to invariant control sets. Longer proofs are shifted to Sections 3–6: Section 3 calculates Lie brackets, Sections 4 and 5 work with control systems and the support theorem to check which parts of the state space are attainable for the stochastic Hodgkin-Huxley system. Finally, Section 6 deals with invariant control sets for the process (i_T(t), X_t)_{t≥0} in order to establish the link between invariant measures for the 5-dimensional skeleton chain (X_{kT})_{k∈IN} and the 6-dimensional continuous-time process (i_T(t), X_t)_{t≥0}.

2 The model

We consider a neuron modeled by a Hodgkin-Huxley system which receives a periodic input S from its dendritic system. The input is random and, as a function of time, modeled by a time inhomogeneous diffusion of mean reverting type, as argued by Höpfner 2007. We start by recalling briefly the classical deterministic Hodgkin-Huxley model.

2.1 The classical Hodgkin-Huxley model with T-periodic input

The classical Hodgkin-Huxley model is a 4-dimensional ordinary differential equation. The first variable V represents the membrane potential, while the other three variables n, m and h are related to the proportion of different types of open ion channels, which allow sodium or potassium ions to enter or to leave the neuron. Let t → S(t) be a T-periodic deterministic signal. The Hodgkin-Huxley equations with input S(t) are

(HH)
dV_t = S(t) dt − [ g_K n_t^4 (V_t − E_K) + g_Na m_t^3 h_t (V_t − E_Na) + g_L (V_t − E_L) ] dt
dn_t = [ α_n(V_t) (1 − n_t) − β_n(V_t) n_t ] dt
dm_t = [ α_m(V_t) (1 − m_t) − β_m(V_t) m_t ] dt
dh_t = [ α_h(V_t) (1 − h_t) − β_h(V_t) h_t ] dt,

where

g_K = 36, g_Na = 120, g_L = 0.3, E_K = −12, E_Na = 120, E_L = 10.6,

with notations and constants of Izhikevich 2007, pp. 37–38. The functions α_n, β_n, α_m, β_m, α_h, β_h in (HH) take values in (0, ∞) and are analytic, i.e. they admit a power series representation on IR.
They are given as follows.

(1)
α_n(v) = (0.1 − 0.01 v) / (exp(1 − 0.1 v) − 1),   β_n(v) = 0.125 exp(−v/80),
α_m(v) = (2.5 − 0.1 v) / (exp(2.5 − 0.1 v) − 1),   β_m(v) = 4 exp(−v/18),
α_h(v) = 0.07 exp(−v/20),   β_h(v) = 1 / (exp(3 − 0.1 v) + 1).

For v ∈ IR, let

(2)
n_∞(v) := (α_n / (α_n + β_n))(v),  m_∞(v) := (α_m / (α_m + β_m))(v),  h_∞(v) := (α_h / (α_h + β_h))(v).
If we think of keeping the variable V constant in (HH), then these are equilibrium values in (0, 1) for the variables n, m, h when V ≡ v ∈ IR. We write E := IR × [0,1]^3 for the state space of (V, n, m, h), with points (v, n, m, h) (see Proposition 1 below for a proof of the fact that the system stays in E whenever it starts there). We use the notation F : E → IR for the drift terms not related to the signal in the first equation of (HH):

(3)
F(v, n, m, h) := g_K n^4 (v − E_K) + g_Na m^3 h (v − E_Na) + g_L (v − E_L)
             = 36 n^4 (v + 12) + 120 m^3 h (v − 120) + 0.3 (v − 10.6).

Define from (3) a function F_∞ : IR → IR by

(4) F_∞(v) := F(v, n_∞(v), m_∞(v), h_∞(v)), v ∈ IR.
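The drift term (3) and the function F_∞ of (4) are equally easy to check numerically; the self-contained sketch below (illustrative, not part of the proofs) confirms that F_∞ is increasing on sample points and that F_∞(0) is a small negative number, a value which reappears in Section 2.4.

```python
import math

def alpha_n(v): return (0.1 - 0.01 * v) / (math.exp(1.0 - 0.1 * v) - 1.0)
def beta_n(v):  return 0.125 * math.exp(-v / 80.0)
def alpha_m(v): return (2.5 - 0.1 * v) / (math.exp(2.5 - 0.1 * v) - 1.0)
def beta_m(v):  return 4.0 * math.exp(-v / 18.0)
def alpha_h(v): return 0.07 * math.exp(-v / 20.0)
def beta_h(v):  return 1.0 / (math.exp(3.0 - 0.1 * v) + 1.0)

def F(v, n, m, h):
    # Eq. (3) with the constants g_K = 36, g_Na = 120, g_L = 0.3.
    return 36.0 * n**4 * (v + 12.0) + 120.0 * m**3 * h * (v - 120.0) + 0.3 * (v - 10.6)

def F_inf(v):
    # Eq. (4): F evaluated at the gating equilibria of eq. (2).
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    return F(v, n, m, h)
```

(The sample points below avoid the removable singularities of α_n and α_m at v = 10 and v = 25.)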
In particular, if we select c ∈ IR such that c = F_∞(v), then

(5) (v, n_∞(v), m_∞(v), h_∞(v)) ∈ E

is an equilibrium point for the deterministic system (HH) with constant signal S(·) ≡ c.

Example 1
It is well known that for sufficiently large values of a constant signal S(·) = c, the deterministic system (HH) exhibits regular spiking (see Rinzel and Miller 1980; for the model constants used here see Endler 2012, Section 2.1, in particular Figure 2.6). This means that for such values of c, the equilibrium point (5) is unstable, and that there is a stable orbit for the 4-dimensional system (V, n, m, h) of 'biological variables'.

2.2 T-periodic diffusions and HH system with stochastic input

From now on we suppose that the T-periodic signal t → S(t) of Subsection 2.1 is an analytic function. We consider a stochastic Hodgkin-Huxley system which receives this signal from its dendritic system as random input. This random input is modeled by the following diffusion

(6) dξ_t = (S(t) − ξ_t) τ dt + γ √τ dW_t,

where we have chosen a parametrization in terms of τ > 0 and γ > 0; ξ is a time inhomogeneous Ornstein-Uhlenbeck type diffusion which carries the signal S.

Remark 1 We have an explicit representation

ξ_t = x e^{−τ(t−s)} + ∫_s^t e^{−τ(t−v)} ( τ S(v) dv + γ √τ dW_v ),  t ≥ s,

for the process starting at time s in x. Introducing the function s → M(s) = ∫_0^∞ S(s − r/τ) e^{−r} dr, the invariant law π of the skeleton chain (ξ_{kT})_{k∈IN} is π = N(M(0), γ²/2), and the law of ξ_s starting at time t = 0 from ξ_0 ∼ π is

L_{π,0}(ξ_s) = N(M(s), γ²/2)

(cf. Höpfner and Kutoyants 2010, Ex. 2.3). Hence the T-periodic signal S(·) is expressed in the process ξ under the 'periodically invariant' regime in the form of moving averages

s → E_{π,0}(ξ_s) = M(s) = ∫_0^∞ S(s − r/τ) e^{−r} dr

which are T-periodic. For large values of τ, M(·) is close to S(·).

Consider now the HH equations driven by the stochastic input dξ_t, i.e.
the 5-dimensional system

(ξHH)
dV_t = dξ_t − [ g_K n_t^4 (V_t − E_K) + g_Na m_t^3 h_t (V_t − E_Na) + g_L (V_t − E_L) ] dt
dn_t = [ α_n(V_t) (1 − n_t) − β_n(V_t) n_t ] dt
dm_t = [ α_m(V_t) (1 − m_t) − β_m(V_t) m_t ] dt
dh_t = [ α_h(V_t) (1 − h_t) − β_h(V_t) h_t ] dt
dξ_t = (S(t) − ξ_t) τ dt + γ √τ dW_t.

Write X = (X_t)_{t≥0}, X_t = (V_t, n_t, m_t, h_t, ξ_t), for the solution of (ξHH) (we show in Proposition 1 below the existence of a unique strong solution), E = IR × [0,1]^3 × IR for the corresponding state space, and denote the elements of E by x = (v, n, m, h, ζ). We write P_x for the probability measure under which the solution X = (X_t)_{t≥0} of (ξHH) starts from x. Let (P_{s_0,s_1}(x, dx'))_{0 ≤ s_0 ≤ s_1} denote the associated family of transition probabilities.

Proposition 1
For any x ∈ E, there exists a unique strong non-exploding solution X to (ξHH) starting from x at time 0, taking values in E.

Proof. By our assumptions, a strong solution ξ_t of (6) exists. Moreover, the coefficients of V and n, m, h are locally Lipschitz continuous. This implies the existence of a unique strong solution of the system (ξHH) which is a maximal solution, i.e. it exists up to some explosion time. So we have to prove that the process does not explode. By assumption, ξ_t does not explode. Consider now the unique solution (V_t, n_t, m_t, h_t, ξ_t) of (ξHH) on [0, T_∞[, where T_∞ is the associated explosion time. It is easy to show that n, m and h stay in [0,1] whenever they start in [0,1]; see for instance Proposition 1 of Höpfner, Löcherbach and Thieullen 2013. Replacing n, m and h by their maximal value 1, we obtain easily from (ξHH) that

(7) |V_t| ≤ |ξ_t| + C_1 ∫_0^t |V_s| ds + C_2 t,  t < T_∞,

where C_1 and C_2 are suitable constants. This implies, using Gronwall's inequality and the non-explosion of ξ_t, that V_t does not explode either. Hence T_∞ = ∞ almost surely and the above estimates hold on [0, ∞[. □

2.3 The weak Hörmander condition

Our system (ξHH) is a 5-dimensional diffusion driven by one-dimensional Brownian motion. As a consequence, the only possibility for guaranteeing non-degeneracy of the system is that the system 'feels the noise via the drift'. In other words, we have to check whether the weak Hörmander condition holds. Since the drift term of (ξHH) depends on time, we add time as a first coordinate to our system. More precisely, we write TT := [0, T] for the torus and identify t with i_T(t) := t mod T. Elements of TT × E will be denoted either by (x_0, x_1, ..., x_5) or by (t, x) or by (t, v, n, m, h, ζ). Working with the space-time process X̄_t = (i_T(t), X_t), the associated drift and diffusion coefficients are the vector fields

(8) b̄(t, x) = (1, b_1(t, x), b_2(t, x), b_3(t, x), b_4(t, x), b_5(t, x))^T ∈ IR^6 and σ̄(t, x) = γ √τ (0, 1, 0, 0, 0, 1)^T ∈ IR^6,

where, for x = (v, n, m, h, ζ),

b_1(t, x) = (S(t) − ζ) τ − F(v, n, m, h),  b_2(t, x) = α_n(v)(1 − n) − β_n(v) n,
b_3(t, x) = α_m(v)(1 − m) − β_m(v) m,  b_4(t, x) = α_h(v)(1 − h) − β_h(v) h,
b_5(t, x) = (S(t) − ζ) τ.

We identify b̄(t, x) and σ̄(t, x) with the differential operators

b̄(t, x) = ∂/∂t + Σ_{i=1}^5 b_i(t, x) ∂/∂x_i and σ̄(t, x) = Σ_{i=1}^5 σ_i(t, x) ∂/∂x_i.

We are now going to introduce the successive Lie brackets that we use in the sequel. First, recall that for vector fields f(t, x) and g(t, x) : TT × E → IR^6, the Lie bracket [f, g] is defined by

[f, g]^i = Σ_{j=0}^5 ( f^j ∂g^i/∂x_j − g^j ∂f^i/∂x_j ),  i = 0, ..., 5,

with superscript 'i' for the i-th component. In this way, for a vector field f : TT × E → IR^6 whose '0-component' equals 0, the Lie bracket [b̄, f] takes the form

[b̄, f]^0 = 0,  [b̄, f]^i = ∂f^i/∂t + Σ_{j=1}^5 ( b_j ∂f^i/∂x_j − f^j ∂b_i/∂x_j ),  i = 1, ..., 5,

and the Lie bracket [σ̄, f] takes the form

[σ̄, f]^0 = 0,  [σ̄, f]^i = γ √τ ( ∂f^i/∂x_1 + ∂f^i/∂x_5 ),  i = 1, ..., 5.

We introduce the following system of sets of vector fields based on iterated Lie brackets.
Definition 1
Define a set L of vector fields by the 'initial condition' σ̄ ∈ L and an arbitrary number of iteration steps

(9) L ∈ L =⇒ [b̄, L], [σ̄, L] ∈ L.

For N ∈ IN, define the subset L_N by the same initial condition and at most N iterations (9). Write L*_N for the closure of L_N under Lie brackets; finally, write ∆L*_N := LA(L_N) for the linear hull of L*_N, i.e. the Lie algebra spanned by L_N. Note that all elements of L*_N have '0-component' equal to zero, so 5 is an obvious upper bound for dim(∆L*_N).

Definition 2 We say that a point x* ∈ E is of full weak Hörmander dimension if there is some N ∈ IN such that

(10) (dim ∆L*_N)(s, x*) = 5 independently of s ∈ TT.
We put I := { x = (v, n, m, h, ζ) ∈ E : x is of full weak Hörmander dimension }.

Remark 2
Notice that in the iteration step (9), it is allowed to build Lie brackets using the drift vector b̄(t, x). It is for this reason that the above condition is called 'weak', in contrast to the 'strong' Hörmander condition. In the strong Hörmander condition, only iterations using the column vectors of the diffusion matrix are allowed. Since in our case the diffusion matrix is built of only one column, it is clear that the strong Hörmander condition can never hold.
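The bracket formulas above can be sanity-checked numerically. The sketch below (the parameter values and the signal S are illustrative assumptions) computes [b̄, σ̄] by central finite differences and compares two components with the closed forms derived in Section 3, namely [b̄, σ̄]^5 = γ τ^{3/2} and [b̄, σ̄]^1 = γ √τ (∂_v F + τ).

```python
import math

GAMMA, TAU, T_PER = 0.5, 2.0, 10.0   # illustrative parameters (assumption)

def S(t):
    # illustrative T-periodic analytic signal (assumption)
    return math.sin(2.0 * math.pi * t / T_PER)

def alpha_n(v): return (0.1 - 0.01 * v) / (math.exp(1.0 - 0.1 * v) - 1.0)
def beta_n(v):  return 0.125 * math.exp(-v / 80.0)
def alpha_m(v): return (2.5 - 0.1 * v) / (math.exp(2.5 - 0.1 * v) - 1.0)
def beta_m(v):  return 4.0 * math.exp(-v / 18.0)
def alpha_h(v): return 0.07 * math.exp(-v / 20.0)
def beta_h(v):  return 1.0 / (math.exp(3.0 - 0.1 * v) + 1.0)

def F(v, n, m, h):
    return 36.0 * n**4 * (v + 12.0) + 120.0 * m**3 * h * (v - 120.0) + 0.3 * (v - 10.6)

def b_bar(x):
    # drift vector field on the space-time state (t, v, n, m, h, zeta); 0-component is 1
    t, v, n, m, h, z = x
    return [1.0,
            (S(t) - z) * TAU - F(v, n, m, h),
            alpha_n(v) * (1.0 - n) - beta_n(v) * n,
            alpha_m(v) * (1.0 - m) - beta_m(v) * m,
            alpha_h(v) * (1.0 - h) - beta_h(v) * h,
            (S(t) - z) * TAU]

def sigma_bar(x):
    # constant diffusion vector field: the noise enters the v- and zeta-components
    c = GAMMA * math.sqrt(TAU)
    return [0.0, c, 0.0, 0.0, 0.0, c]

def lie_bracket(f, g, x, eps=1e-5):
    # [f, g]^i = sum_j (f^j dg^i/dx_j - g^j df^i/dx_j), central differences
    fx, gx = f(x), g(x)
    out = []
    for i in range(6):
        acc = 0.0
        for j in range(6):
            xp, xm = list(x), list(x)
            xp[j] += eps
            xm[j] -= eps
            acc += fx[j] * (g(xp)[i] - g(xm)[i]) / (2.0 * eps)
            acc -= gx[j] * (f(xp)[i] - f(xm)[i]) / (2.0 * eps)
        out.append(acc)
    return out
```

Since b_1 is affine in v and b_5 is affine in ζ, the central differences are exact up to rounding for the components checked below.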
In the following we are going to state a sufficient condition ensuring that a given point belongs to I. In order to do so, let

(11) D(v, n, m, h) := det
( ∂²_v b_2  ∂³_v b_2  ∂⁴_v b_2 )
( ∂²_v b_3  ∂³_v b_3  ∂⁴_v b_3 )
( ∂²_v b_4  ∂³_v b_4  ∂⁴_v b_4 )
(v, n, m, h),  (v, n, m, h) ∈ IR × [0,1]^3,

where ∂^k_v denotes the k-fold partial derivative with respect to v, and introduce

O := { (v, n, m, h) ∈ IR × [0,1]^3 : D(v, n, m, h) ≠ 0 }.

We quote the following proposition from Höpfner, Löcherbach and Thieullen 2013.
Proposition 2 (Proposition 9 of [12]) O is an open set of full Lebesgue measure, i.e. λ(O^c) = 0.

Calculating the first four Lie brackets of our system, using successively first the drift vector and then three times the diffusion coefficient, we obtain the following theorem.
Theorem 1
All points x = (v, n, m, h, ζ) in E whose first four components belong to O are points satisfying the weak Hörmander condition.

The proof of Theorem 1 is given in Section 3. Notice that D(v, n, m, h) does not depend on time.

Remark 3 We resume the numerical study of Section 5.4 in Höpfner, Löcherbach and Thieullen 2013. First of all, the set O is certainly non-empty since we find a strictly negative value of the determinant e.g. at the equilibrium point (0, n_∞(0), m_∞(0), h_∞(0)) of the 4-dimensional deterministic system (HH). In order to obtain more information about O, we calculate

(∗) v → D(v, n_∞(v), m_∞(v), h_∞(v))

at equilibrium points of (HH) which correspond to constant input S(·) ≡ c. By equations (4) and (5), equilibrium requires c := F_∞(v). Since v → F_∞(v) is strictly increasing, there is a one-to-one correspondence between values v of the membrane potential and values c of the input. Calculating the function (∗) as v ranges over a large interval, we find zeros at exactly two points, one negative and one slightly above v = 10. The function (∗) is strictly negative in between, and strictly positive outside. In particular, equilibrium points of (HH) under constant input c between the two corresponding critical values belong to the set O.

2.4 The weak Hörmander condition holds everywhere

Let y* = (0, n_∞(0), m_∞(0), h_∞(0)) be the equilibrium point for the deterministic system (HH) driven by constant input c = F_∞(0) ≈ −0.05. By Remark 3 above we know that y* ∈ O. In this section we will show that for any neighborhood U of y*, the set U × IR is accessible. Since the coefficients of the system are analytic, this will imply that the weak Hörmander condition holds on the whole state space E.

We start with the following proposition which is due to discussions with Michel Benaïm, see also Benaïm, Le Borgne, Malrieu, Zitt 2012.
It shows that, starting from any initial point, our system can reach U × IR for any open neighborhood U of y*.

Proposition 3
Let U be a neighborhood of y* in IR × [0,1]^3. Then for all x ∈ E, there exists t_0 such that for all t ≥ t_0,

P_{0,t}(x, U × IR) > 0.

In particular, for the T-skeleton chain (X_{kT})_{k≥0} it holds that for all x ∈ E there exists k ≥ 1 such that

P_x(X_{kT} ∈ U × IR) > 0.
The weak Hörmander condition holds on E, i.e. I = E.

Proof.
The proof uses the following fact: for any diffusion process having analytic coefficients,

(12) if X_t ∉ I, then X_{t+s} ∉ I for all s ≥ 0.

Suppose now that I ≠ E, and fix x ∈ E \ I. We will apply Proposition 3 with this fixed starting point. Since y* ∈ O, we may choose a neighborhood U of y* sufficiently small such that U ⊂ O. Since U × IR ⊂ O × IR ⊂ I, Proposition 3 then implies that there exists t* such that for the fixed x ∈ E \ I,

P_x(X_{t*} ∈ I) ≥ P_x(X_{t*} ∈ U × IR) > 0.

But, applying (12), we have P_x(X_{t*} ∈ I) = 0, since x ∉ I. This is a contradiction. □
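With non-degeneracy in hand, it is instructive to look at trajectories of (ξHH) themselves. The following Euler-Maruyama sketch (all parameter values and the signal S are illustrative assumptions, and the crude first-order scheme is ours, not part of the paper's analysis) also illustrates Proposition 1: along the simulated path the gating variables remain in [0,1] and V remains bounded.

```python
import math
import random

TAU, GAMMA, T_PER = 1.0, 0.5, 10.0   # illustrative parameters (assumption)

def S(t):
    return 2.0 + math.sin(2.0 * math.pi * t / T_PER)   # illustrative periodic signal

def alpha_n(v):
    d = math.exp(1.0 - 0.1 * v) - 1.0
    return 0.1 if abs(d) < 1e-9 else (0.1 - 0.01 * v) / d
def beta_n(v):  return 0.125 * math.exp(-v / 80.0)
def alpha_m(v):
    d = math.exp(2.5 - 0.1 * v) - 1.0
    return 1.0 if abs(d) < 1e-9 else (2.5 - 0.1 * v) / d
def beta_m(v):  return 4.0 * math.exp(-v / 18.0)
def alpha_h(v): return 0.07 * math.exp(-v / 20.0)
def beta_h(v):  return 1.0 / (math.exp(3.0 - 0.1 * v) + 1.0)

def F(v, n, m, h):
    return 36.0 * n**4 * (v + 12.0) + 120.0 * m**3 * h * (v - 120.0) + 0.3 * (v - 10.6)

def simulate(t_end=50.0, dt=0.01, seed=1):
    # Euler-Maruyama discretization of the system (xi-HH)
    rng = random.Random(seed)
    v, n, m, h, xi = 0.0, 0.32, 0.05, 0.60, 0.0
    path = []
    for k in range(int(t_end / dt)):
        t = k * dt
        d_xi = (S(t) - xi) * TAU * dt + GAMMA * math.sqrt(TAU) * rng.gauss(0.0, math.sqrt(dt))
        dv = d_xi - F(v, n, m, h) * dt            # dV_t = d xi_t - F dt
        dn = (alpha_n(v) * (1.0 - n) - beta_n(v) * n) * dt
        dm = (alpha_m(v) * (1.0 - m) - beta_m(v) * m) * dt
        dh = (alpha_h(v) * (1.0 - h) - beta_h(v) * h) * dt
        v, n, m, h, xi = v + dv, n + dn, m + dm, h + dh, xi + d_xi
        path.append((v, n, m, h, xi))
    return path
```

Since dt·(α_x + β_x) stays well below 1 along realistic voltage ranges, the Euler step keeps each gate inside [0,1] whenever it starts there, mirroring the argument in the proof of Proposition 1.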
Once the weak Hörmander condition holds on E, it follows that the process possesses Lebesgue densities.
For 0 ≤ s_0 < s_1 < ∞, consider the process X starting at time s_0 from an arbitrary x ∈ E. Then the law P_{s_0,s_1}(x, ·) admits a Lebesgue density p_{s_0,s_1}(x, y). For fixed x, p_{s_0,s_1}(x, y) is continuous in y, uniformly in x. Moreover, for any fixed x′ ∈ E, the map x → p_{s_0,s_1}(x, x′) is lower semi-continuous.
The weak Hörmander condition holds everywhere. If the coefficients of our system were C_b^∞ and time homogeneous, then classical results as presented e.g. in Kusuoka and Stroock 1985, Corollary (3.25), or in Nualart 1995, Theorem 2.3.3, would allow us to conclude. However, the coefficients of our system are not C_b^∞ and they are time inhomogeneous. But treating time as a first coordinate and using a localization argument allows us to prove the assertion. The proof of Theorem 1 in Höpfner, Löcherbach and Thieullen 2013 gives the details. □

2.5 Ergodicity of the stochastic Hodgkin-Huxley system

We start by showing, using Lyapunov functions, that almost surely the system comes back to a compact set infinitely often. We are working with the T-skeleton (X_{kT})_{k≥0}, where T is the periodicity of the underlying signal S.
1. There exists a compact set K ⊂ E such that for all x ∈ E, P_x-almost surely,

Σ_{k=0}^∞ 1_K(X_{kT}) = ∞.
2. There exist an integer N ≥ 1, ε_1 > 0, ..., ε_N > 0, x_1, ..., x_N ∈ K and y_1, ..., y_N ∈ E such that K is covered by B_{ε_1}(x_1), ..., B_{ε_N}(x_N) and such that for any 1 ≤ i ≤ N,

(15) inf_{x′ ∈ B_{ε_i}(x_i), y′ ∈ B_{ε_i}(y_i)} p_{0,T}(x′, y′) > 0.
Let Φ : E → [1, ∞[ be a C²-function satisfying Φ(x) = (x_1)² + (x_5)² for all x such that |x_1| ∨ |x_5| ≥ 1, Φ(x) arbitrary elsewhere. Write L_t for the generator of (ξHH) at fixed time t. Since 0 ≤ x_2, x_3, x_4 ≤ 1, it is easy to see that

(13) L_t Φ(x) ≤ −c_1 Φ(x) + c_2 1_{K̃}(x),

where c_1, c_2 are positive constants and where K̃ = { x ∈ E : |x_1| ≤ C, |x_5| ≤ C } is a compact subset of E. Applying Itô's formula to e^{c_1 t} Φ(x) and using localization with inf{ t : Φ(X_t) ≥ m } as m → ∞, we obtain

E_x Φ(X_t) ≤ e^{−c_1 t} Φ(x) + c_2/c_1 for all t > 0.

Let now t = T, where T is the period of the underlying signal. Thus

P_{0,T} Φ(x) − Φ(x) ≤ −(1 − e^{−c_1 T}) Φ(x) + c_2/c_1.

In particular, there exist constants C_1, C_2 such that

(14) P_{0,T} Φ(x) − Φ(x) ≤ −ε for all x with |x_1| > C_1 or |x_5| > C_2,

for some fixed ε > 0. By Theorem 4.3 of Meyn and Tweedie 1992, we know that (14) implies the following statement: starting from any point in E, the skeleton chain (X_{kT})_{k∈IN} visits the compact set K = { x ∈ E : |x_1| ≤ C_1, |x_5| ≤ C_2 } infinitely often. This proves the first assertion of the proposition.

Concerning the second assertion, observe that for any x ∈ K there exists y ∈ E such that p_{0,T}(x, y) > 0. By continuity in y and lower semi-continuity in x, this can be extended to small balls around x and y. As a consequence, for any x ∈ K there exist y and ε > 0 such that

inf_{x′ ∈ B_ε(x), y′ ∈ B_ε(y)} p_{0,T}(x′, y′) > 0.

Hence the compact set K is covered by a finite number of such balls B_{ε_1}(x_1), ..., B_{ε_N}(x_N), with associated points y_1, ..., y_N. This shows the second assertion of the proposition. □
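The generator computation behind (13) is elementary and can be probed numerically. The sketch below takes the quadratic choice Φ(x) = (x_1)² + (x_5)² away from the origin (our reading of the function used above; τ, γ and S are illustrative assumptions). For this Φ one has L_tΦ(x) = 2 x_1 b_1 + 2 x_5 b_5 + 2 γ² τ, and the sketch checks that this is negative far away from the compact set K̃.

```python
import math

TAU, GAMMA, T_PER = 2.0, 0.5, 10.0   # illustrative parameters (assumption)

def S(t):
    return math.sin(2.0 * math.pi * t / T_PER)   # bounded periodic signal

def F(v, n, m, h):
    return 36.0 * n**4 * (v + 12.0) + 120.0 * m**3 * h * (v - 120.0) + 0.3 * (v - 10.6)

def L_Phi(t, v, n, m, h, z):
    # Generator of (xi-HH) applied to Phi(x) = v^2 + zeta^2.
    # First-order part: 2 v b_1 + 2 zeta b_5.
    # Second-order part: (1/2)(a_vv * 2 + a_zz * 2) = 2 gamma^2 tau,
    # since a_vv = a_zz = gamma^2 tau and the cross second derivative of Phi is 0.
    b1 = (S(t) - z) * TAU - F(v, n, m, h)
    b5 = (S(t) - z) * TAU
    return 2.0 * v * b1 + 2.0 * z * b5 + 2.0 * GAMMA**2 * TAU
```

Far from the origin the restoring terms −2v·F and −2ζ²τ dominate, which is exactly the mechanism giving the constants c_1, c_2 in (13).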
The lower bound (15) can be rewritten as follows. For any 1 ≤ k ≤ N,

(16) P_{0,T}(x, dy) ≥ β_k 1_{B_{ε_k}(x_k)}(x) ν_k(dy),

where

β_k = λ(B_{ε_k}(y_k)) · inf_{x′ ∈ B_{ε_k}(x_k), y′ ∈ B_{ε_k}(y_k)} p_{0,T}(x′, y′) and ν_k = (1/λ(B_{ε_k}(y_k))) λ|_{B_{ε_k}(y_k)}.

Using Nummelin splitting (see e.g. Nummelin 1978), this implies the following.
Theorem 3
a) The T-skeleton chain (X_{kT})_{k≥0} possesses ergodic invariant measures.
b) Any invariant measure for the T-skeleton chain admits a continuous density with respect to Lebesgue measure on E.
c) The T-skeleton chain admits at most a finite number of extremal invariant measures living on disjoint Harris subsets of E.

Proof.
Parts a) and c) are essentially Meyn and Tweedie 1992, decomposition Theorem 2.1 and Theorem 4.5; note that our skeleton chain satisfies the assumptions of both theorems, due to assertion 2 in our Corollary 1.

a) Let ν denote any probability measure on E. Starting from ν, P_ν-a.s., the skeleton chain visits the compact set K infinitely often. As a consequence, for P_ν-almost all ω, there exists (at least one) index k = k(ω) ∈ {1, ..., N} such that the ω-path (X_{nT}(ω))_{n≥0} of the skeleton chain visits B_{ε_k}(x_k) infinitely often. Let A_k denote the set of all paths which visit B_{ε_k}(x_k) infinitely often, 1 ≤ k ≤ N. Let (U_n)_n be an i.i.d. sequence of uniform U(0,1)-distributed random variables, independent of the process. By means of these, we can introduce a sequence of regeneration times (1 + R^{(k)}_n)_{n≥1} associated to successive visits of B_{ε_k}(x_k) in the following way:

R^{(k)}_{n+1} = inf{ l > R^{(k)}_n : X_{lT} ∈ B_{ε_k}(x_k), U_l ≤ β_k },  n ≥ 0,  R^{(k)}_0 ≡ 0,

where β_k is from the lower bound (16). For every k ∈ {1, ..., N}, the R^{(k)}_n are finite on A_k for all n, and satisfy R^{(k)}_n ↑ ∞ on A_k as n → ∞. There is at least one k ∈ {1, ..., N} such that A_k has positive P_ν-measure. We suppose without loss of generality that k = 1. By the lower bound (16), P_{0,T}(x, dy) ≥ β_1 1_{B_{ε_1}(x_1)}(x) ν_1(dy), and using the Borel-Cantelli lemma (see also Lemma 1.1 of Meyn and Tweedie 1992), any path belonging to A_1 also visits B_{ε_1}(y_1) infinitely often. Recall that ν_1 is the uniform measure on B_{ε_1}(y_1). By Nummelin splitting with minorization according to (16) and regeneration times (1 + R^{(1)}_n)_n, P_ν(A_1) > 0 implies

P_{ν_1}( R^{(1)}_1 < ∞ ) = 1

as a consequence of the Markov property at times (1 + R^{(1)}_n)_n and the Borel-Cantelli lemma. Similarly, for 1 ≤ k ≤ N, call B_{ε_k}(y_k) a 'good' set if P_{ν_k}(R^{(k)}_1 < ∞) = 1. At least one such 'good' set exists, namely B_{ε_1}(y_1) in the notation above.
Notice that being a 'good' set is a property which only depends on the whole ball B_{ε_k}(y_k) and the semigroup of the process. Rearranging the numbering, we find some maximal subset {1, ..., N_1} of {1, ..., N}, 1 ≤ N_1 ≤ N, with the property that P_{ν_k}(R^{(k′)}_1 < ∞) = 0 whenever k ≠ k′ in {1, ..., N_1}. Stated equivalently, this rearrangement induces a partition A_1 ∪̇ A_2 ∪̇ ... ∪̇ A_{N_1} of the path space, up to some remaining set of paths which has P_ν-measure zero for every initial law ν. Next, we define

τ := R^{(1)}_1 ∧ R^{(2)}_1 ∧ ... ∧ R^{(N_1)}_1,

which is P_ν-almost surely finite for every initial law ν. By Nummelin splitting and the strong law of large numbers,

(17) μ(f) := Σ_{k=1}^{N_1} E_ν( 1_{{X_{τT} ∈ B_{ε_k}(x_k)}} E_{X_{τT}}( Σ_{l=R^{(k)}_n+1}^{R^{(k)}_{n+1}} f(X_{lT}) ) )

is an invariant measure of the skeleton chain, combining in an 'adaptive' way the relevant A_k's from the above partition. Note that this formula extends the usual form of the invariant measure to the case where several balls are present in the lower bound (16). Now define for any 1 ≤ k ≤ N_1 the measure

μ_k(f) := E_{ν_k}( Σ_{l=0}^{R^{(k)}_1} f(X_{lT}) )

and let H_k ⊂ E be the support of μ_k, 1 ≤ k ≤ N_1. Then for any initial measure ν concentrated on H_k, sets A with μ_k(A) > 0 are visited infinitely often, P_ν-almost surely.

b) Any invariant measure μ is absolutely continuous with respect to Lebesgue measure, thanks to Corollary 1, with Lebesgue density y → ∫ μ(dz) p_{0,T}(z, y). Moreover, since the continuity of p_{0,T}(z, y) in y is uniform in z, the corresponding Lebesgue density is continuous by dominated convergence.

c) In order to achieve the proof of Theorem 3, we have to show that the skeleton chain possesses only a finite number of extremal invariant probability measures which have supports given by disjoint subsets of E. We prove this in Section 6 below.
From the structure of the above Lyapunov condition (14) and the boundedness of $P_{0,T}\Phi$ on $K$ according to (7), we deduce that, in restriction to a Harris set, recurrence is necessarily positive recurrence. □

Recall that $i_T(t)$ denotes $t$ mod $T$ and that $\mathbb{T} = [0,T]$ is the torus. We get the following corollary of the above theorem.

Theorem 4
Under our assumptions, the process $(i_T(t), X_t)_{t \geq 0}$ admits at most a finite number of extremal invariant measures living on disjoint Harris subsets of $\mathbb{T} \times E$.

Proof:
See again Section 6.
Remark 4
By Arnold and Kliemann 1987 we know that the support of any extremal invariant measure of the space-time process $(i_T(t), X_t)_{t \geq 0}$ is given by an invariant control set of the associated deterministic control system (see Section 6 below for a precise definition, see also Colonius and Kliemann 1993). Hence the number of invariant control sets of the associated deterministic control system gives an a priori upper bound on the number of extremal invariant measures (and thus the number of Harris sets) of $(i_T(t), X_t)_{t \geq 0}$.

Let $\bar X_t = (i_T(t), V_t, n_t, m_t, h_t, \xi_t)$ be the diffusion process of ($\xi$HH) to which we have added time as first coordinate, with state space $\mathbb{T} \times E$. Recall the exact form of $\bar b(t,x)$ and $\bar\sigma(t,x)$ given in (8). By the structure of the diffusion coefficient, the equation is already written in the Stratonovich sense. We start by calculating the Lie bracket of $\bar\sigma$ and $\bar b$, where we recall that we are working on $\mathbb{T} \times \mathbb{R}^5$:
$$[\bar b, \bar\sigma] = \gamma\sqrt{\tau}\, \big( 0,\ \partial_v F,\ -\partial_v b_2,\ -\partial_v b_3,\ -\partial_v b_4,\ \tau \big)^T.$$
In the same way, we obtain
$$[\bar\sigma, [\bar b, \bar\sigma]] = \gamma^2 \tau\, \big( 0,\ \partial_v^2 F,\ -\partial_v^2 b_2,\ -\partial_v^2 b_3,\ -\partial_v^2 b_4,\ 0 \big)^T$$
and
$$[\bar\sigma, [\bar\sigma, [\bar b, \bar\sigma]]] = \gamma^3 \tau^{3/2}\, \big( 0,\ \partial_v^3 F(v,n,m,h),\ -\partial_v^3 b_2,\ -\partial_v^3 b_3,\ -\partial_v^3 b_4,\ 0 \big)^T.$$
We obtain an analogous formula for $[\bar\sigma, [\bar\sigma, [\bar\sigma, [\bar b, \bar\sigma]]]]$, where fourth derivatives with respect to $v$ appear.

Now we are able to conclude our proof. By definition of $F$ in (3), $\partial_v F(v,n,m,h) \neq 0$ for all $(v,n,m,h) \in E$ and $\partial_v^k F(v,n,m,h) \equiv 0$ for $k \geq 2$. Notice that the above vectors all have the first coordinate, corresponding to time, equal to zero. Hence we may identify them with elements of $\mathbb{R}^5$. Doing so, without changing notations, we have for all fixed $x \in E$,
$$\det\Big( \bar\sigma \ \Big|\ [\bar b, \bar\sigma] \ \Big|\ [\bar\sigma, [\bar b, \bar\sigma]] \ \Big|\ [\bar\sigma, [\bar\sigma, [\bar b, \bar\sigma]]] \ \Big|\ [\bar\sigma, [\bar\sigma, [\bar\sigma, [\bar b, \bar\sigma]]]] \Big) = 0$$
if and only if
$$\det \begin{pmatrix} \partial_v F & 0 & 0 & 0 \\ -\partial_v b_2 & -\partial_v^2 b_2 & -\partial_v^3 b_2 & -\partial_v^4 b_2 \\ -\partial_v b_3 & -\partial_v^2 b_3 & -\partial_v^3 b_3 & -\partial_v^4 b_3 \\ -\partial_v b_4 & -\partial_v^2 b_4 & -\partial_v^3 b_4 & -\partial_v^4 b_4 \end{pmatrix} = 0.$$
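The mechanism behind these formulas, namely that each further bracketing with the constant field $\bar\sigma$ differentiates the entries once more with respect to $v$, can be checked numerically. The sketch below does this for a two-dimensional toy analogue with one voltage-like and one gate-like variable; the rate functions are invented smooth stand-ins, only $F$ being linear in $v$ is kept from (3), and the brackets are evaluated by finite-difference Jacobians.

```python
import math

def jacobian(f, x, eps=1e-6):
    # Central finite-difference Jacobian of f : R^d -> R^d at the point x.
    d = len(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(d):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return J

def lie_bracket(a, b):
    # [a, b](x) = Db(x) a(x) - Da(x) b(x)
    def bracket(x, eps=1e-6):
        Ja, Jb = jacobian(a, x, eps), jacobian(b, x, eps)
        ax, bx = a(x), b(x)
        d = len(x)
        return [sum(Jb[i][j] * ax[j] - Ja[i][j] * bx[j] for j in range(d))
                for i in range(d)]
    return bracket

# Toy 2-dimensional analogue: one voltage-like variable v and one gate n.
# F is linear in v, as in (3); g, E_rev are the standard K+ conductance and
# reversal potential, the rate functions are invented smooth stand-ins.
g, E_rev, c = 36.0, -77.0, 1.0

def drift(x):
    v, n = x
    F = g * n ** 4 * (v - E_rev)
    gate = math.exp(-v / 2.0) * (1.0 - n) - math.exp(v / 2.0) * n
    return [-F, gate]

def sigma(x):
    return [c, 0.0]        # constant vector field: noise on the voltage variable

x0 = [-1.0, 0.3]
br1 = lie_bracket(drift, sigma)(x0)                                 # [b, sigma]
br2 = lie_bracket(sigma, lie_bracket(drift, sigma))(x0, eps=1e-3)   # [sigma, [b, sigma]]
# br1 = (c dF/dv, -c d(gate)/dv) and br2 = (c^2 d^2F/dv^2, -c^2 d^2(gate)/dv^2):
# one more bracket with sigma, one more v-derivative.
```

In particular the first entry of the second bracket vanishes, reflecting $\partial_v^2 F \equiv 0$, while the gate entry picks up a second $v$-derivative; this is exactly the structure that feeds the determinant condition above.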
Developing this determinant first with respect to the last line and then with respect to the first line of the remaining sub-determinant, this last determinant is different from zero if and only if $D(v,n,m,h) \neq 0$ (recall the definition of $D(v,n,m,h)$ in (11)). As a consequence, $\bar\sigma,\ [\bar b,\bar\sigma],\ \ldots,\ [\bar\sigma,[\bar\sigma,[\bar\sigma,[\bar b,\bar\sigma]]]]$ span $\mathbb{R}^5$ for all $x \in E$ such that $D(v,n,m,h) \neq 0$, and therefore the weak Hörmander condition is satisfied on $O \times \mathbb{R}$. □

Proof of Proposition 3.
We consider the system ($\xi$HH) driven by $S$ of Section 2.2,
(18) $$X_s = x + \int_0^s \sigma(X_u)\, dW_u + \int_0^s b(u, X_u)\, du, \quad 0 \leq s \leq t,$$
where
(19) $$b(t,x) = \begin{pmatrix} b_1(t,x) \\ \vdots \\ b_5(t,x) \end{pmatrix} \quad\text{and}\quad \sigma(x) = \gamma\sqrt{\tau}\, (1, 0, 0, 0, 1)^T \in \mathbb{R}^5.$$
We write $C = C([0,\infty[, \mathbb{R}^5)$ for the space of continuous functions and endow $C$ with its canonical filtration $(\mathcal{F}_t)_{t \geq 0}$. Let $P_{0,x}$ be the law of $(X_u, u \geq 0)$ on $C$, starting from $x$ at time 0.

With $y^*$ and $U$ as in Proposition 3, we wish to find lower bounds for quantities of the form $P_{0,x}(B)$ where $B = \{ f \in C : f(t) \in U \times \mathbb{R} \} \in \mathcal{F}_t$. In order to do so, we will use control arguments and the support theorem for diffusions. We need first to localize the system. Let $K_n = [-n,n] \times [0,1]^3 \times [-n,n] \subset E$ and let $T_n = \inf\{ t : X_t \in K_n^c \}$ be the exit time of $K_n$. For a fixed $n$, let $b_n(t,x)$ and $\sigma_n(x)$ be $C_b^\infty$-extensions in $x$ of $b(t, \cdot)|_{K_n}$ and $\sigma|_{K_n}$. Let $X^n$ be the associated diffusion process. For any fixed $n_0 < n$ and any starting point $x \in K_{n_0}$, we write $P^n_{0,x}$ for the law of $(X^n_u, u \geq 0)$ on $C$. Then for any $t > 0$ and $B \in \mathcal{F}_t$,
(20) $$P_{0,x}(B) \geq P_{0,x}(\{ f \in B;\ T_n > t \}) = P^n_{0,x}(\{ f \in B;\ T_n > t \}).$$
It suffices to show that this last expression is strictly positive, for the given set $B$, for any fixed $x \in K_{n_0}$. For this sake we will use the support theorem for diffusions of Stroock and Varadhan 1972. Let
$$\mathcal{H} = \left\{ h : [0,t] \to \mathbb{R} : h(s) = \int_0^s \dot h(u)\, du\ \ \forall s \leq t,\ \int_0^t \dot h(u)^2\, du < \infty \right\}$$
be the Cameron-Martin space. Given $h \in \mathcal{H}$, consider $X(h)$ the solution of the differential equation
(21) $$X(h)_s = x + \int_0^s \sigma_n(X(h)_u)\, \dot h(u)\, du + \int_0^s b_n(u, X(h)_u)\, du, \quad 0 \leq s \leq t,$$
where $X(h)$ is of the form $X(h) = (X(h)^1, X(h)^2, X(h)^3, X(h)^4, X(h)^5)$. Notice that there is no difference between the Itô- and Stratonovich-form thanks to the specific structure of the diffusion coefficient in our case.

The support theorem in its classical form is stated for diffusions whose parameters are homogeneous in time. In order to fit into this framework, we replace as before the 5-dimensional process $X^n$ by a 6-dimensional process $(t, X^n_t)$ which is now a classical time-homogeneous diffusion process. This shows that the support theorem applies directly also in the time inhomogeneous case. As a consequence, see e.g. Theorem 3.5 of Millet and Sanz-Solé 1994 or Theorem 4 of Ben Arous, Gradinaru and Ledoux 1994, the support of the law $P^n_{0,x}$ restricted to $\mathcal{F}_t$ is the closure of the set $\{ X(h) : h \in \mathcal{H} \}$ with respect to the uniform norm on $C([0,t], \mathbb{R}^5)$. In order to find lower bounds for (20) we have to construct solutions $X(h)$ of (21) which stay in $K_n$ during $[0,t]$. On $K_n$, both processes $X^n$ and $X$ have the same coefficients.
Hence, by restricting to $K_n$, the above control problem (21) is equivalent to

(HHcontrolled)
$$\frac{d}{ds} X(h)^1_s = \frac{d}{ds} X(h)^5_s - F(X(h)^1_s, X(h)^2_s, X(h)^3_s, X(h)^4_s)$$
$$\frac{d}{ds} X(h)^2_s = \alpha_n(X(h)^1_s)(1 - X(h)^2_s) - \beta_n(X(h)^1_s)\, X(h)^2_s$$
$$\frac{d}{ds} X(h)^3_s = \alpha_m(X(h)^1_s)(1 - X(h)^3_s) - \beta_m(X(h)^1_s)\, X(h)^3_s$$
$$\frac{d}{ds} X(h)^4_s = \alpha_h(X(h)^1_s)(1 - X(h)^4_s) - \beta_h(X(h)^1_s)\, X(h)^4_s$$
$$\frac{d}{ds} X(h)^5_s = (S(s) - X(h)^5_s)\,\tau + \gamma\sqrt{\tau}\, \dot h(s).$$

We construct an explicit solution of (HHcontrolled) starting from the fixed initial condition $x = (v, n, m, h, \zeta) \in K_{n_0}$ at time 0 in the following way. First, we choose a path $\bar v_t = \gamma(t)\, v$, $t \geq 0$, going from $v$ to 0. Here, $\gamma$ is a smooth function $\mathbb{R}_+ \to [0,1]$ with $\gamma(0) = 1$ and $\gamma(t) = 0$ for all $t \geq 1$. Hence for all $t \geq 1$, $\bar v_t \equiv 0$, and $\bar v_0 = v$. Then, solving the equations for $n$, $m$ and $h$ explicitly, for this fixed choice of $\bar v_t$, we obtain
$$\bar n_t = n\, e^{-\int_0^t a_n(\bar v_s)\, ds} + \int_0^t b_n(\bar v_u)\, e^{-\int_u^t a_n(\bar v_r)\, dr}\, du,$$
where $a_n = \alpha_n + \beta_n$, $b_n = \alpha_n$. We have analogous representations for $\bar m_t$ and $\bar h_t$. Since $\bar v_t \equiv 0$ for $t \geq 1$, it follows that
(22) $$|\bar n_{t+1} - n_\infty(0)| \leq C\, e^{-t\, a_n(0)},$$
where the constant depends on $v$ and $n$. The same convergence result holds for $\bar m_t$ and $\bar h_t$. Fix $\varepsilon$ such that $B_\varepsilon(y^*) \subset U$. Then there exists $t_1$ such that for all $t \geq t_1$,
(23) $$(\bar n_t, \bar m_t, \bar h_t) \in B_{\varepsilon/2}(n_\infty(0), m_\infty(0), h_\infty(0)).$$
Now we want to choose $h$ such that
(24) $$\frac{d}{ds} X(h)^5_s = \frac{d}{ds} \bar v_s + F(\bar v_s, \bar n_s, \bar m_s, \bar h_s) \quad \text{for all } s \geq 0,$$
that is,
$$X(h)^5_s = \zeta + \bar v_s - v + \int_0^s F(\bar v_u, \bar n_u, \bar m_u, \bar h_u)\, du =: J_s = J_s(v, n, m, h, \zeta).$$
Hence, if we define
$$\dot h(s) := \frac{ \frac{d}{ds}\bar v_s + F(\bar v_s, \bar n_s, \bar m_s, \bar h_s) + (J_s - S(s))\,\tau }{ \gamma\sqrt{\tau} },$$
then $(\bar v_s, \bar n_s, \bar m_s, \bar h_s, X(h)^5_s)^T$ is indeed a solution of (HHcontrolled), for this specific choice of $h$. Fix $t \geq t_1$. Notice that $\dot h$ is well-defined and that $\dot h \in L^2([0,t])$, hence $h \in \mathcal{H}$.
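The relaxation estimate (22) is easy to visualize numerically. The following sketch is an illustration under assumptions: it uses the classical Hodgkin-Huxley rate functions for the $n$-gate (the concrete rates of the present model are fixed elsewhere in the paper) and a piecewise-linear ramp in place of the smooth cutoff, then integrates the $n$-equation with an Euler scheme and checks the exponential approach to $n_\infty(0) = \alpha_n(0)/(\alpha_n(0)+\beta_n(0))$.

```python
import math

# Classical Hodgkin-Huxley rate functions for the n-gate (an assumption made
# for illustration only).
def alpha_n(v):
    if abs(10.0 - v) < 1e-9:
        return 0.1                        # removable singularity at v = 10
    return 0.01 * (10.0 - v) / (math.exp((10.0 - v) / 10.0) - 1.0)

def beta_n(v):
    return 0.125 * math.exp(-v / 80.0)

def v_bar(t, v0):
    # Piecewise-linear stand-in for the smooth cutoff: v0 at t = 0, zero from t = 1 on.
    return v0 * max(0.0, 1.0 - t)

def n_bar(t_end, v0=-20.0, n0=0.9, dt=1e-3):
    # Euler scheme for dn/dt = alpha_n(v_bar)(1 - n) - beta_n(v_bar) n.
    n, t = n0, 0.0
    while t < t_end:
        v = v_bar(t, v0)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        t += dt
    return n

# Rest value n_infty(0) and the exponential approach once v_bar is frozen at 0:
n_inf0 = alpha_n(0.0) / (alpha_n(0.0) + beta_n(0.0))
gaps = [abs(n_bar(t) - n_inf0) for t in (5.0, 15.0, 30.0)]
```

The successive gaps decay geometrically at the rate $a_n(0) = \alpha_n(0) + \beta_n(0)$ predicted by (22), so after a fixed time the gating variables sit in the ball required by (23).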
With this choice of $h$, the first four lines of (HHcontrolled) reduce to the deterministic system (HH) with input signal $s \mapsto \frac{d}{ds}\bar v_s + F(\bar v_s, \bar n_s, \bar m_s, \bar h_s)$. Write $Y$ for the associated deterministic solution starting from $(v,n,m,h)$ at time 0 and $X^x_s = (Y_s, J_s)$, $s \leq t$, starting from $x$ at time 0. For $n$ sufficiently large, $X^x_s \in K_n$ for all $s \leq t$. By the support theorem, for every $\delta > 0$, putting $B^\infty_\delta(X^x) = \{ f \in C : \sup_{s \leq t} |f(s) - X^x_s| < \delta \}$, we have that $P^n_{0,x}(B^\infty_\delta(X^x)) > 0$. Now, choose $\delta \leq \varepsilon/2$ and $n$ sufficiently large such that $B^\infty_\delta(X^x) \subset \{ f \in C : T_n(f) > t \}$. Since $B_\varepsilon(y^*) \subset U$ and recalling (23), we have that
$$B^\infty_\delta(X^x) \subset \{ f \in C : f(t) \in B_\varepsilon(y^*) \times \mathbb{R} \}.$$
This implies
$$P_x(X_t \in B_\varepsilon(y^*) \times \mathbb{R}) \geq P^n_{0,x}(B^\infty_\delta(X^x)) > 0,$$
which finishes our proof. □

For the convenience of the reader we will recall basic concepts from control theory as exposed in Sussmann 1973. As above, in order to be able to deal with the $T$-periodic drift coefficient, we work with the space-time process $\bar X_t = (i_T(t), X_t)$, $t \geq 0$. Recall that the drift and diffusion coefficients $\bar b(t,x)$ and $\bar\sigma(t,x)$ of $\bar X$ have been introduced in (8). We introduce the following family of control vector fields
(25) $$\mathcal{G} = \{ \bar b + c\,\bar\sigma,\ c \in \mathbb{R} \}.$$
Here, by definition of $\mathcal{G}$, the control parameter $c$ acts on the diffusion part only. There is no control on the drift part. Control vector fields from $\mathcal{G}$ correspond to the controls $h$ of Section 4 in the case of piecewise constant $\dot h$. Let $\mathcal{G}^*$ be the smallest set of vector fields containing $\mathcal{G}$ which is closed under Lie brackets. We introduce the mapping $\Delta_{\mathcal{G}^*}$ which assigns to every space-time point $(t,x) \in \mathbb{T} \times E$ the linear subspace
$$\Delta_{\mathcal{G}^*}(t,x) = \mathrm{Span}\{ V^*(t,x) : V^* \in \mathcal{G}^* \}.$$
Notice that $\Delta_{\mathcal{G}^*}(t,x) = \mathrm{Span}\{ \bar b(t,x),\ L(t,x) : L \in \mathcal{L} \}$, where the Lie algebra $\mathcal{L}$ has been introduced in (9).

We say that two points $(t,x)$ and $(t^*, x^*)$ in $\mathbb{T} \times E$ belong to the same orbit of $\mathcal{G}$ if and only if there exists a curve $\gamma$ defined on some interval $[a,b]$ and a suitable partition $a = t_0 < t_1 < \ldots < t_r = b$ such that $\gamma(a) = (t,x)$, $\gamma(b) = (t^*, x^*)$ and such that on each $]t_{i-1}, t_i[$ there exists a constant $c_i \in \mathbb{R}$ with either
(26) $$\dot\gamma(t) = \bar b(\gamma(t)) + c_i\, \bar\sigma(\gamma(t)) \quad \text{or} \quad \dot\gamma(t) = -\bar b(\gamma(t)) + c_i\, \bar\sigma(\gamma(t)).$$
Since the coefficients of $\bar b$ and $\bar\sigma$ are analytic, by Nagano 1966, see also Sussmann 1973, Theorem 8.1 and Section 9, we know that for any $\mathcal{G}$-orbit $S \subset \mathbb{T} \times E$, the following holds. For all $(t,x), (t^*, x^*) \in S$, we have that
(27) $$\dim \Delta_{\mathcal{G}^*}(t,x) = \dim \Delta_{\mathcal{G}^*}(t^*, x^*).$$
In particular, this implies the following. Suppose that $(0,x)$ and $(t^*, x^*)$ belong to the same $\mathcal{G}$-orbit $S$ and that $\dim \Delta_{\mathcal{G}^*}(t^*, x^*) = 6$. This is equivalent to $\dim \Delta_{\mathcal{L}^*_N}(t^*, x^*) = 5$ for some $N \geq 1$, where we recall the definition of $\Delta_{\mathcal{L}^*_N}$ in Definition 1. Then we have also for the starting point $(0,x)$ the full dimension $\dim \Delta_{\mathcal{G}^*}(0,x) = 6$, or equivalently, $\dim \Delta_{\mathcal{L}^*_N}(0,x) = 5$ for some $N \geq 1$. We are now ready to give the proof of (12).
Proof of (12).
In the following, we will work with piecewise constant control functions, which we call 'admissible controls'. Our proof relies on the fact that the support of $P_{0,x}$ is the closure of the set of all paths $X(h)$, as defined in (21), where $h$ is an admissible control.

We prove the following fact: if $X_t \notin I$, then $X_{t+s} \notin I$ for all $s \geq 0$, almost surely. Conditioning on $X_t = x$, we can assume without loss of generality that $t = 0$ and $X_0 = x \notin I$. Thus, all deterministic control paths $X(h)$ issued from $x$ and using an admissible control are such that the curve $(i_T(t), X(h)_t)$ belongs to an orbit of $\mathcal{G}$ on which the dimension of $\Delta_{\mathcal{G}^*}$ is strictly less than 6. This implies that, for any fixed $N$, $\dim \Delta_{\mathcal{L}^*_N} < 5$ along any such curve. By the support theorem, this implies that $X_s \notin I$ for all $s \geq 0$, $P_x$-almost surely. This concludes our proof. □

We still work with the space-time process $\bar X_t = (i_T(t), X_t)$, $t \geq 0$, taking values in $\mathbb{T} \times E$. The process $\bar X_t$ has the transition operator $\bar P_t$ given by
$$\bar P_t((s,x), \cdot) = \delta_{i_T(t+s)} \otimes P_{s,s+t}(x, \cdot).$$
We shall denote invariant probability measures of $\bar X_t$ by $\bar\mu$.

In order to prove Part c) of Theorem 3, we use control sets, as in Arnold and Kliemann 1987, to characterize the support of extremal invariant probability measures $\bar\mu$ of $\bar X_t$. For that sake, for any $(s,x) \in \mathbb{T} \times E$ and any $t > 0$, we put
$$O^+(t, (s,x)) = \Big\{ \gamma(t) : \text{there exists an admissible control } h \text{ such that } \gamma(u) = (s,x) + \int_0^u \big[ \bar b(\gamma(r)) + \bar\sigma(\gamma(r))\, \dot h(r) \big]\, dr \text{ for all } u \leq t \Big\}.$$
Notice that in the above definition we are moving through the orbit forward in time. In other words, $O^+(t, (s,x))$ is the set of all points reachable from $(s,x)$ forward in time during a time period of length $t$. We will also write
$$O^+((s,x)) = \bigcup_{t > 0} O^+(t, (s,x)),$$
the set of all points reachable forward in time, starting from $(s,x)$. Then a set $F \subset \mathbb{T} \times E$ is called an invariant control set if $\overline{O^+((s,x))} = \bar F$ for all $(s,x) \in F$. Notice that invariant control sets are necessarily disjoint. By Proposition 1.1 of Arnold and Kliemann 1987, to any extremal invariant probability measure $\bar\mu$ is associated a unique invariant control set $F$ such that $\mathrm{supp}\, \bar\mu = \bar F$.
In the following we start by describing the relationship between extremal invariant probability measures $\bar\mu$ of the space-time process $\bar X_t$ and extremal invariant probability measures $\mu$ of the skeleton chain $(X_{kT})_{k \geq 0}$. Then we prove that there are only finitely many invariant control sets.
Proposition 5
The following assertions are equivalent.

1. $\mu$ is an invariant probability measure of the skeleton chain $(X_{kT})_{k \geq 0}$.
2. For any $s \in\ ]0,T[$, the measure $\mu_s := \mu P_{0,s}$ is an invariant probability measure of $(X_{kT+s})_{k \geq 0}$, and $\mu_s P_{s,s+t} = \mu_{i_T(s+t)}$.
3. The measure
(28) $$\bar\mu := \frac{1}{T} \int_0^T ds\, (\delta_s \otimes \mu_s)$$
is an invariant probability measure of $\bar X$.

Proof of Proposition 5.
It is straightforward to show the equivalence of the first two points 1. and 2. Formula (28) then shows how to build invariant measures for the process $\bar X$ starting from invariant measures of the skeleton chain. We have to show that, conversely, any invariant measure $\bar\mu$ for $\bar X$ can be written in the form (28). The first marginal of $\bar\mu$ (the time coordinate) is necessarily the uniform measure on the torus $\mathbb{T}$.
Then by Lebesgue disintegration we have $\bar\mu = \frac{1}{T}\int_0^T ds\, (\delta_s \otimes K(s,\cdot))$, where $K(s, dx)$ is a regular version of the conditional distribution of the second component of $\bar\mu$ given the first component. By invariance, $\bar\mu \bar P_h = \bar\mu$ for all $h > 0$. Take $\bar\mu$ as starting law for $\bar X$. If the value $s$ has been selected for the first component, then $\tilde\mu := K(s,\cdot)$ acts as starting law for the second component. The construction of $\bar X$ and the explicit form of the transition $\bar P_h$ yield $K(s,\cdot)\, P_{s,s+h} = \mathcal{L}_{\tilde\mu}(X_h)$. Under the same starting condition, the first component of $\bar X_h$ is $i_T(s+h)$, thus the second component of $\bar X_h$ has law $K(i_T(s+h), \cdot) = \mathcal{L}_{\tilde\mu}(X_h)$.

Note that for every law $\tilde\mu$ on $E$ and every $f \in C_b(E)$, $h \mapsto E_{\tilde\mu}(f(X_h))$ is continuous, by continuity of the sample paths of $X$. In particular, taking $\tilde\mu = K(s,\cdot)$ as above and $h = T - s + t$, $0 \leq t \leq T$, we obtain that $t \mapsto \int_E f(x)\, K(t, dx)$ is continuous. This implies that we can take a version of $K(\cdot,\cdot)$ such that $t \mapsto K(t,\cdot)$ is continuous. This avoids problems related to $\lambda(ds)$-null sets in the conditional expectations. Thus we have proved that $K(s,\cdot)\, P_{s,s+h} = K(i_T(s+h), \cdot)$ for all $s \in \mathbb{T}$ and all $h > 0$. $\mathbb{T}$ being the torus, the invariance $\bar\mu \bar P_T = \bar\mu$ now gives $K(s,\cdot) = K(s,\cdot)\, P_{s,s+T}$ for all $s \in \mathbb{T}$. Thus $\mu_s := K(s,\cdot)$ is an invariant measure for $(X_{kT+s})_{k \geq 0}$, and with $\mu := K(0,\cdot)$ we have 1. and 2. □
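A discrete toy version of this correspondence can be checked in closed form. The sketch below uses an invented time-inhomogeneous Gaussian chain of period $T = 2$: one step is $x \mapsto a x + c_0 + \mathcal N(0,1)$, the next $x \mapsto a x + c_1 + \mathcal N(0,1)$. Gaussian laws push forward exactly through their means and variances, so one can verify both that $\mu$ is invariant for the skeleton $P_{0,2}$ and that the shifted law $\mu_1 = \mu P_{0,1}$ is invariant for the shifted skeleton, exactly as in assertion 2 of Proposition 5.

```python
# Toy period-2 analogue of Proposition 5; all constants are invented.
# One transition pushes a Gaussian law N(mean, var) through x -> a*x + c + N(0,1).
a, c0, c1 = 0.5, 1.0, -0.3

def push(mean, var, c):
    return a * mean + c, a * a * var + 1.0

# Invariant law mu of the skeleton chain P_{0,2} = P_1 o P_0, computed by hand:
m0 = (a * c0 + c1) / (1.0 - a * a)      # invariant mean
v0 = 1.0 / (1.0 - a * a)                # invariant variance

# Check 1: mu P_{0,2} = mu.
m_check, v_check = push(*push(m0, v0, c0), c1)

# Check 2: mu_1 := mu P_{0,1} is invariant for the shifted skeleton (P_1 then P_0).
m1, v1 = push(m0, v0, c0)
m1_check, v1_check = push(*push(m1, v1, c1), c0)
```

Averaging $\delta_s \otimes \mu_s$ over the period, as in (28), then yields the invariant law of the space-time chain.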
Proposition 6

$\bar\mu = \frac{1}{T}\int_0^T ds\, (\delta_s \otimes \mu_s)$ is an extremal invariant measure of $\bar X$ if and only if any $\mu_s$ is an extremal invariant measure of $(X_{kT+s})_{k \geq 0}$.

Proof of Proposition 6. Suppose that $\mu_s = \mu P_{0,s}$ is extremal for $(X_{kT+s})_{k \geq 0}$. We have to show that $\bar\mu = \frac{1}{T}\int_0^T ds\, (\delta_s \otimes \mu_s)$ is extremal for $\bar X$. Suppose that there exists $\alpha \in\ ]0,1[$ such that $\bar\mu = \alpha \bar\mu^1 + (1-\alpha)\bar\mu^2$ with $\bar\mu^1 \neq \bar\mu^2$ invariant measures of $\bar X$. Lebesgue disintegration yields
$$\bar\mu^i = \frac{1}{T}\int_0^T ds\, (\delta_s \otimes \mu^i_s), \quad i = 1, 2,$$
and therefore
$$\bar\mu = \frac{1}{T}\int_0^T ds\, \Big(\delta_s \otimes \big(\alpha \mu^1_s + (1-\alpha)\mu^2_s\big)\Big) = \frac{1}{T}\int_0^T ds\, (\delta_s \otimes \mu_s).$$
This implies that for all $s \in \mathbb{T}$, $\mu_s = \alpha \mu^1_s + (1-\alpha)\mu^2_s$, where $\mu^1_s$ and $\mu^2_s$ are invariant measures of the skeleton chain $(X_{kT+s})_{k \geq 0}$. Since $\mu_s$ is extremal, it follows from this that $\mu^1_s = \mu^2_s = \mu_s$ for all $s \in \mathbb{T}$, which implies that $\bar\mu^1 = \bar\mu^2 = \bar\mu$, a contradiction. On the other hand, it is straightforward to show that $\bar\mu$ extremal implies that any $\mu_s$ is an extremal invariant measure. □

We are now able to give the proof of Theorem 3 c). We have already shown that the skeleton chain possesses invariant probability measures. We now show that the skeleton chain possesses only a finite number of extremal invariant probability measures. Let $\mu$ be such an extremal measure and let $\bar\mu$ be the associated extremal invariant measure of $\bar X$. Then there exists an invariant control set $F$ such that $\mathrm{supp}\, \bar\mu = \bar F$.
Fix a starting point $(0,x) \in F$ and consider the process issued from $(0,x)$. Then the skeleton chain $(X_{kT})_{k \geq 0}$ starting from $x$ at time 0 induces a subset $\{(0, X_{kT}),\ k \geq 0\} \subset \{\bar X_t : t \geq 0\}$; it is this subset which is in the center of our interest.

With the notation introduced above, let $A_k$ denote the set of all paths which visit $B_{\varepsilon_k}(x_k)$ infinitely often, and $A'_k$ the set of all paths which visit $B_{\varepsilon_k}(y_k)$ infinitely often. Since $K$ is visited infinitely often almost surely, there exists an index $k \in \{1, \ldots, N\}$ such that $P_x(A_k) > 0$. Nummelin splitting then shows that $P_x(A'_k) > 0$. This means that $B_{\varepsilon_k}(y_k)$ belongs entirely to the support of $\sum_{l \geq 1} e^{-l} P_{0,lT}(x, \cdot)$; i.e.,
(29) $$B_{\varepsilon_k}(y_k) \subset \mathrm{supp} \sum_{l \geq 1} e^{-l} P_{0,lT}(x, \cdot).$$
But by the support theorem,
$$\mathrm{supp} \sum_{l \geq 1} e^{-l} P_{0,lT}(x, \cdot) = \overline{\bigcup_{l \geq 1} \Pi\big( O^+(lT, (0,x)) \big)},$$
where $\Pi$ denotes the projection on the space variable. Thus, using (29) and the fact that $F$ is an invariant control set,
$$\{0\} \times B_{\varepsilon_k}(y_k) \subset \overline{\bigcup_{l \geq 1} O^+(lT, (0,x))} \subset \overline{O^+((0,x))} = \bar F.$$
Hence any invariant control set $F$ which is the support of an extremal invariant measure $\bar\mu$ is such that its closure contains (at least) one of the finitely many balls $\{0\} \times B_{\varepsilon_k}(y_k)$. Since invariant control sets are pairwise disjoint, no two control sets can contain the same ball $\{0\} \times B_{\varepsilon_k}(y_k)$ at the same time. Thus there exist only finitely many such invariant control sets, that is, only finitely many extremal invariant probability measures $\bar\mu$ of $\bar X$, and hence, by Lebesgue disintegration, only finitely many extremal invariant probability measures $\mu$ of the skeleton chain. This concludes our proof. □

Acknowledgments
We thank Michel Benaïm very warmly for stimulating discussions on control arguments. We also thank two anonymous referees for helpful comments and suggestions.
References

[1] K. Aihara, G. Matsumoto, and Y. Ikegaya. Periodic and non-periodic responses of a periodically forced Hodgkin-Huxley oscillator. J. Theoret. Biol., 109:249–269, 1984.

[2] Ludwig Arnold and Wolfgang Kliemann. On unique ergodicity for degenerate diffusions. Stochastics, 21:41–61, 1987.

[3] Gérard Ben Arous, Mihai Gradinaru, and Michel Ledoux. Hölder norms and the support theorem for diffusions. Ann. Inst. Henri Poincaré, Probab. Stat., 30:415–436, 1994.

[4] Nils Berglund and Barbara Gentz. Stochastic dynamic bifurcations and excitability. In: Laing, Carlo (ed.) and Lord, Gabriel J. (ed.), Stochastic methods in neuroscience. Oxford University Press, Oxford, 2010.

[5] Nils Berglund and Damien Landon. Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model. Nonlinearity, 25(8):2303–2335, 2012.

[6] Fritz Colonius and Wolfgang Kliemann. Some aspects of control systems as dynamical systems. J. Dyn. Differ. Equations, 5(3):469–494, 1993.

[7] Mathieu Desroches, John Guckenheimer, Bernd Krauskopf, Christian Kuehn, Hinke M. Osinga, and Martin Wechselberger. Mixed-mode oscillations with multiple time scales. SIAM Rev., 54(2):211–288, 2012.

[8] Alain Destexhe. Conductance-based integrate and fire models. Neural Comput., 9:503–514, 1997.

[9] Kevin Endler. Periodicities in the Hodgkin-Huxley model and versions of this model with stochastic input. Master Thesis, Institute of Mathematics, University of Mainz (see under http://ubm.opus.hbz-nrw.de/volltexte/2012/3083/), 2012.

[10] John Guckenheimer and Ricardo A. Oliva. Chaos in the Hodgkin-Huxley model. SIAM J. Appl. Dyn. Syst., 1(1):105–114, 2002.

[11] A. Hodgkin and A. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117:500–544, 1952.

[12] R. Höpfner, E. Löcherbach, and M. Thieullen. Transition densities for strongly degenerate time inhomogeneous random models. Available on http://arxiv.org/abs/1310.7373, 2013.

[13] Reinhard Höpfner. On a set of data for the membrane potential in a neuron. Math. Biosci., 207(2):275–301, 2007.

[14] Reinhard Höpfner and Klaus Brodda. A stochastic model and a functional central limit theorem for information processing in large systems of neurons. J. Math. Biol., 52(4):439–457, 2006.

[15] Reinhard Höpfner and Yury Kutoyants. Estimating discontinuous periodic signals in a time inhomogeneous diffusion. Stat. Inference Stoch. Process., 13(3):193–230, 2010.

[16] Nobuyuki Ikeda and Shinzo Watanabe. Stochastic differential equations and diffusion processes. North-Holland Mathematical Library. North-Holland, Tokyo/Amsterdam/New York, 1989.

[17] Eugene M. Izhikevich. Dynamical systems in neuroscience: the geometry of excitability and bursting. Computational Neuroscience. MIT Press, Cambridge, Mass., 2007.

[18] Ioannis Karatzas and Steven E. Shreve. Brownian motion and stochastic calculus. 2nd ed. Graduate Texts in Mathematics 113. Springer-Verlag, New York, 1991.

[19] Kiyoshi Kawazu and Shinzo Watanabe. Branching processes with immigration and related limit theorems. Teor. Veroyatn. Primen., 16:34–51, 1971.

[20] Wolfgang Kliemann. Recurrence and invariant measures for degenerate diffusions. Ann. Probab., 15:690–707, 1987.

[21] H. Kunita. Lectures on stochastic flows and applications. Delivered at the Indian Institute of Science, Bangalore, under the T.I.F.R.-I.I.Sc. programme in applications of mathematics. Notes by M. K. Ghosh. Tata Institute of Fundamental Research Lectures on Mathematics and Physics 78. Springer-Verlag, Berlin, 1986.

[22] Hiroshi Kunita. Stochastic flows and stochastic differential equations. Cambridge Studies in Advanced Mathematics 24. Cambridge University Press, Cambridge, 1997.

[23] S. Kusuoka and D. Stroock. Applications of the Malliavin calculus. II. J. Fac. Sci., Univ. Tokyo, Sect. IA, 32:1–76, 1985.

[24] Sean P. Meyn and R. L. Tweedie. Stability of Markovian processes. I: Criteria for discrete-time chains. Adv. Appl. Probab., 24(3):542–574, 1992.

[25] A. Millet and M. Sanz-Solé. A simple proof of the support theorem for diffusion processes. In: Azéma, Jacques (ed.), Meyer, Paul André (ed.) and Yor, Marc (ed.), Séminaire de Probabilités XXVIII. Lecture Notes in Mathematics 1583. Springer-Verlag, Berlin, 1994.

[26] C. Morris and H. Lecar. Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35:193–213, 1981.

[27] David Nualart. The Malliavin calculus and related topics. Probability and Its Applications. Springer-Verlag, New York, 1995.

[28] E. Nummelin. A splitting technique for Harris recurrent Markov chains. Z. Wahrscheinlichkeitstheor. Verw. Geb., 43:309–318, 1978.

[29] Esa Nummelin. General irreducible Markov chains and non-negative operators. Cambridge Tracts in Mathematics 83. Cambridge University Press, Cambridge, 1984.

[30] E. V. Pankratova, A. V. Polovinkin, and E. Mosekilde. Resonant activation in a stochastic Hodgkin-Huxley model: Interplay between noise and suprathreshold driving effects. The European Physical Journal B - Condensed Matter and Complex Systems, 45(3):391–397, 2005.

[31] John Rinzel and Robert N. Miller. Numerical calculation of stable and unstable periodic solutions to the Hodgkin-Huxley equations. Math. Biosci., 49:27–59, 1980.

[32] Jonathan Rubin and Martin Wechselberger. Giant squid-hidden canard: the 3D geometry of the Hodgkin-Huxley model. Biological Cybernetics, 97(1):5–32, 2007.

[33] Daniel W. Stroock and S. R. S. Varadhan. On the support of diffusion processes with applications to the strong maximum principle. Proc. Sixth Berkeley Symp. Math. Statist. Probab., Univ. Calif. 1970, Vol. 3, 333–359, 1972.

[34] Hector J. Sussmann. Orbits of families of vector fields and integrability of distributions. Trans. Am. Math. Soc., 180:171–188, 1973.

[35] Y. Yu, W. Wang, J. Wang, and F. Liu. Resonance-enhanced signal detection and transduction in the Hodgkin-Huxley neuronal systems.