Characterising the Non-Equilibrium Dynamics of a Neural Cell
Dalton A R Sakthivadivel∗
Department of Biomedical Engineering, Stony Brook University, Stony Brook, New York, 11794-5281
(Dated: 19th February 2021)

We examine the dynamical evolution of the state of a neurone, with particular care to the non-equilibrium nature of the forces influencing its movement in state space. We combine non-equilibrium statistical mechanics and dynamical systems theory to characterise the nature of the neural resting state, and its relationship to firing. The stereotypical shape of the action potential arises from this model, as well as bursting dynamics, and the non-equilibrium phase transition from resting to spiking. Geometric properties of the system are discussed, such as the birth and shape of the neural limit cycle, which provide a complementary understanding of these dynamics. This provides a multiscale model of the neural cell, from molecules to spikes, and explains various phenomena in a unified manner. Some more general notions for damped oscillators, birth-death processes, and stationary non-equilibrium systems are included.

∗ [email protected]
I. INTRODUCTION
The neurone, like many other natural systems, is an open system in constant exchange with its environment. Due to its small scale and the precise degree of control which a cell exerts over its internal environment, as well as the complex dynamics exhibited by neural cells in particular, it is an object of interest from a physical point of view. Neural spikes are both a regular, periodic phenomenon, and a highly complex, non-equilibrium process. As an example of self-organisation, neural firing emerges from complex but quantifiable dynamics, here involving ionic equilibria and membrane selectivity [1]. The neurone is surrounded by ions in its extracellular fluid, meaning it is subject to diffusion of these ions across its cell membrane, through ion channels. It maintains a negative resting potential of −70 mV, which requires active transport of positive Na⁺ ions out of the cell by pumps. This results in a persistent concentration gradient, which is precisely what allows a spike to occur. When a critical voltage is reached, previously closed voltage-gated ion channels open. The positive Na⁺ ions flow along this concentration gradient through these now open channels, leading to an upwards spike in voltage.

Likewise, biological phenomena are often due to equally complex physical processes, such as critical or non-equilibrium dynamics, making the physics present of interest to the biologist [2]. In neuroscience this is especially true: while our understanding of the biology of the action potential is relatively complete, most mathematical models that explain it are built phenomenologically, as non-linear oscillators that lack explanatory power [3]. As such, certain processes, like the presence of spiking behaviour, the shape of the action potential, or the nature of bursting activity, are not explained in a mathematically mechanistic way. This requires tools from complex systems theory or non-equilibrium statistical mechanics, which are uniquely suited to studying such systems [4, 5]. These methods are very powerful, but often poorly understood.

We will analyse the neurone as both a physical and biological object, explaining the fundamental phenomenology of neural dynamics with fundamental physics. Phase planes are useful objects of study, as they allow us to 'geometrise' the dynamics of a system. As such, we begin by considering a flow in a state space as an analogy which will allow us to describe the system. We can model the state of the neurone as a position, a change in state as a velocity, and a change in that as a force acting on the system. In this way, we will formulate an equation of motion for the system based on x, ẋ, and ẍ, which contain x and y dynamics for a spiking and adaptation variable, respectively.
These positions in state space depend on quantities associated to real space, which we denote as r, such that we have x(r, t). We can now evaluate the dynamical and statistical behaviour of the system by analysing its motion in state space.

II. MAIN RESULTS

A. Non-Equilibrium Resting State Dynamics
We begin by formulating a model of resting state neural dynamics, and use this to investigate non-equilibrium phenomena like spiking in the neurone. Neural dynamics at rest are primarily characterised by a stationary state far from equilibrium. The neurone rests at an unvarying voltage of approximately −70 mV, despite the equilibrium concentration of the ions surrounding it being higher. This will help draw a picture of the dynamics at hand.

As we began with, we will use some basic analogies for these quantities; primarily, that the position in state space should be a membrane voltage, because that is how a neural spike is typically measured. If the position x is the voltage inside the neurone, it is characterised by the concentration of particles at a point r compared to an equilibrium value, so that

    x(r, t) = p(r, t) − p_eq    (1)

where p(r, t) is the density of particles, defined by the concentration of Na⁺ at each point r. As such, globally, p(r, t) − p_eq is a gradient of ion concentrations in real space,

    x(t) = ∇[Na⁺]_t    (2)

which can vary in time. Clearly, the voltage is proportional to the out-of-equilibriumness of the system, implying that voltage is zero when the gradient is zero and the system is at equilibrium; this also follows from (1).

To derive the force in the phase plane, ẍ(r, t), and thus an equation of motion, we consider a slightly simplified model of neural ion dynamics. The neurone experiences diffusion of ions across its membrane due to an imbalance of charge. Therefore, this diffusion is exactly proportional to how 'non-zero' x is, which we measure by distance in the phase plane, ‖x‖. The voltage can be thought of as a potential driving the diffusion into the cell.

Diffusion, then, is the spatial change in local particle densities, or movement in the phase plane; in turn, it is a force acting on the phase plane position, against the gradient that defines position. This also suggests that it is the second order temporal change in concentration gradient, which is confirmed by noting that the speed of the movement of ions is attenuated by diffusion.
Together, these facts give us an important dynamical equation,

    ẍ(r, t) = −a‖x(r, t)‖    (3)

for some parameter a, which may differ for the x and y components of x. Equivalently, we have the relationship

    ẍ(r, t) = −∂V(x(r, t))/∂x,

where this relationship, along with (2) and (3), implies that the function for the concentration of Na⁺ ions is a potential for the system in state space. This is confirmed by a global continuity condition, where the number of particles travelling into the neurone (a kinetic energy) and the number of particles inside the neurone (the potential energy) are, together, conserved. Thus, we have the beginnings of a dynamical analogy by way of our definition of position.

We note above that the movement of ions is the first order temporal change in concentration gradient x(r, t), which gives us ẋ(r, t), the movement of ions at place r and time t, with speed ẍ(r, t) ∝ x(r, t). Now, since the ionic concentration is a potential energy, transferring potential energy to kinetic energy as we move towards equilibrium increases velocity. In diffusion, the final velocity is constantly zero at equilibrium, making it a point of minimum total energy. Note that the final velocity is zero in this case, unlike in an oscillator, where it is constant but non-zero. This difference will be crucial to constructing the difference between a non-oscillatory stationary state and a spike, as it suggests the diffusion at equilibrium is a damped oscillator. This damping cannot be derived from the potential, and must be added heuristically, like so:

    ẍ(r, t) = −b ẋ(r, t) − a‖x(r, t)‖    (4)

This idea is made more particular by the existence of a negative concentration gradient, which is given by the fact that there are more positive ions outside the cell than inside.
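In one dimension, the relation ẍ = −∂V/∂x together with (3) fixes the potential, up to a constant, as V(x) = a·x·‖x‖/2; this explicit form is an assumption made here for illustration, not one stated in the text. A finite-difference sketch confirms the force matches the right-hand side of (3):

```python
def V(x, a=0.5):
    # assumed 1D potential consistent with (3): dV/dx = a*|x|
    return a * x * abs(x) / 2.0

def force(x, a=0.5, h=1e-6):
    # phase-plane force -dV/dx, by central finite difference
    return -(V(x + h, a) - V(x - h, a)) / (2.0 * h)

# check against the right-hand side of (3), -a*|x|, at a few sample points
for x in (-2.0, -0.5, 1.0, 3.0):
    assert abs(force(x) - (-0.5 * abs(x))) < 1e-6
```

The parameter a = 0.5 is arbitrary; any positive value gives the same agreement between −dV/dx and −a‖x‖.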
The neural resting potential, or system equilibrium µ, is less than the more 'natural' resting position of the system, the true equilibrium x*. Since µ < x*, ions diffuse into the cell, as diffusion adds energy to the system to correct the charge imbalance and attain equilibrium. Thus, the force acts against the direction of the gradient, and contains a small negative parameter with respect to the position. This parameter could represent spatial diffusion rate, or how many molecules pass into the membrane for a given gradient size. As stated, we will assume that a is temporally constant, but allow it to vary spatially, e.g., with respect to x or y.

Physically, this is a cold system coupled to a high temperature bath, where a tendency to equilibrate to x* drives the system. These are energy fluctuations, due to noisy particle diffusion at short spatial and temporal scales, and a potential energy from the existence of a gradient at longer scales. The difference in local and global dynamics is reflected in equation (1) as compared to (2). This suggests a wealth of other information about this system, such as the possibility of defining a Fokker-Planck equation for (1), and using a principled coarse-graining like the Mori-Zwanzig projection to go from this to Langevin dynamics. We will return to some of this later.

As stated, this diffusive tendency would, ordinarily, also be responsible for stopping the current, as the concentration of ions will eventually equilibrate. Eventually the gradient in (4) should go to zero, and with it the dynamics at every scale. Additionally, once at equilibrium, these fluctuations would be countered by an equivalent dissipation that maintains equilibrium. Again, this is reflected in the fact that the force in (4) is proportional to the current ẋ, such that ẍ defines a damped system.

However, as we know, the system equilibrium is maintained at µ < x*.
This stationary non-equilibrium point means there is a constant concentration gradient, and so (4) is never zero; and yet, despite this, µ is a stationary point. Clearly there is another component in the force on the system. Recalling the membrane dynamics introduced earlier, we know that what maintains this concentration gradient is the constant action of pumps, which remove diffusing particles. As such, ‖x‖ is identically non-zero. This concentration gradient is what allows for firing to occur.

The fact that this diffusion is, by construction, constant in time, and the dynamics stationary, means that there is damping as given in (4); however, it is not associated with diffusion. The loss of kinetic energy as we move to equilibrium does not occur, because we are out of equilibrium. Instead, we have an artificial damping, due to the activity of pumps. This makes the force due to the concentration gradient decouple from the movement of ions, as both are modulated more directly by pump activity. Now, the force from displacement is independent of the effective ion movement.

The effective ion movement, in turn, is a competition between ions entering through channels and being pumped out. We can define the concentration of ions in the cell by the following equation:

    ẍ(r, t) = −λ ẋ(r, t) + c(t),    (5)

which defines the relationship between ions being pumped out of the cell and ions entering the cell. The pump activity is negative due to the loss of energy to the concentration gradient, while the force due to ion entrance adds energy and thus is positive. The concentration gradient is negative because the force it exerts also requires energy loss when above equilibrium; in a self-correcting way, it adds energy when below equilibrium.

We absorb the two into an effective ion movement, which requires the pump activity to dominate movement.
In other words, supposing that ẋ ∝ c(t), and |λ ẋ(r, t)| > c(t), we can define damping as an effective movement due to pumps. Regarding any entrance of ions as a momentary perturbation,

    ẍ(r, t) = −λ ẋ(r, t),

from which we have the following:

    ẍ(r, t) = −λ ẋ(r, t) − a‖x(r, t)‖.    (6)

Equation (6) comes from combining our effective movement with the force from the concentration gradient, as laid out in the previous justifications. In the stationary case, this implies ẋ ∝ x, a classic fluctuation-dissipation relation; it also gives us a linear response dynamic.

Statistically, this is a mean reverting system in which damping is always in place; when the system is excited by external perturbation, it eventually returns to µ. This is shown in Figure 1, the plotted solution to (6) for the one-dimensional case, focussing on the spiking variable. Here, a kind of 'dissipation kernel' acts on normal mode oscillation in the system to give the stereotypical form of the activation function. We plot the forces as defined previously: a component for the concentration of ions in the neurone, and one for the movement of the state of the neurone as it seeks to return to equilibrium. These components combine additively in the differential equation and multiplicatively in its solution. At a perturbation, when many ions have flowed in and the concentration is high, we have a spike. As these ions are pumped out, we have the falling action, or repolarisation, of the neurone. When the activity of pumps coincides with the natural oscillatory behaviour of the force of displacement, we observe hyperpolarisation. Biologically, this is the pumping of extra Na⁺ ions out of the cell, exacerbating the existing concentration gradient in the negative direction.

Figure 1. Stereotypical neural firing dynamics arise from the dissipation of intracellular ion concentration. The plotted solution to (6) reproduces the shape of a neural action potential as a non-sinusoidal oscillation about an equilibrium. We show the solution to the single variable equivalent of (6) in black, the dissipative component in red, and the underlying oscillation in blue. The multiplication of this dissipation kernel with our underlying motion gives an action potential.
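The mean-reverting behaviour of (6) can be sketched numerically. Reading the restoring term as linear in the signed displacement from µ (so that the one-dimensional equation is a standard damped oscillator), and using illustrative parameter values rather than fitted ones, an impulse perturbation produces a rise, a fall, an undershoot, and a return to rest:

```python
# Minimal sketch of the one-dimensional dynamics of (6).
# lam (pump damping) and a (gradient restoring rate) are illustrative values.
lam, a = 0.3, 1.0
dt, steps = 0.001, 40000
x, v = 0.0, 0.0            # displacement from mu, and its velocity
xs = []
for i in range(steps):
    if i == 1000:
        v += 5.0           # impulse perturbation: a packet of ions enters
    acc = -lam * v - a * x # equation (6), one-dimensional, linear restoring term
    v += acc * dt          # semi-implicit Euler step (stable for small dt)
    x += v * dt
    xs.append(x)

peak = max(xs)             # depolarisation-like peak
undershoot = min(xs)       # hyperpolarisation-like dip below rest
final = xs[-1]             # eventual return towards equilibrium
```

With these values the system is underdamped, so the trajectory overshoots below rest before decaying, qualitatively matching the undershoot described above.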
We also have the suggestion of possible burst patterns in our spiking activity. Typically these are given by particular limit cycle shapes that allow for extended firing dynamics, such as a slow spiking or ion variable modulated by a fast subsystem [6]. Our model is capable of exhibiting similar behaviour, as in the hysteresis loop in [7]: if the dissipation kernel is deactivated by setting it to a maximum in the middle of a repolarisation, the outward current is deactivated and another spike is emitted. We will comment more on spiking in the next section.
B. Transition From Diffusion to Ordered Uptake
The dynamics at rest are only a single element of the neural system, and by far the least interesting. Neural spiking is perhaps the most famous and complex phenomenon in the human body. There is a threshold potential present in the neurone, and an all-or-nothing spiking rule that determines the transition to a spiking state. Previously, this transition has been described in a simplified way, to explain some features in artificial neural networks [8]. We will derive a model to describe this in a more complete way.

Spiking entails the neurone experiencing random diffusion of ions across the membrane until a critical depolarisation is reached, called the threshold potential, and then experiencing ordered, stereotypical dynamics. This could be defined as a new attractor surrounding the fixed point of the resting state. The more interesting case would be, in following with the utilisation of heat and energetic metaphors in the phase plane, modelling this phase transition from randomness to order as a Hopf bifurcation, which depends on some parameter related to the threshold potential. A limit cycle arises out of the system when this critical point is met.

To begin, we define the Hopf bifurcation as an elimination of damping from a damped 'normal form' of the system. Suppose that, following ideas from the fluctuation-dissipation theorem defined previously, all asymptotic fixed points in systems capable of oscillation are actually highly damped steady-state solutions to an oscillator with underlying oscillatory dynamics. Then, the dynamics settle into the fixed point according to the following dynamical equation:

    ẍ = −λ ẋ − b(x),    (7)

taking the restoring term linear, b(x) = kx. The characteristic polynomial of (7) has roots r = −λ ± √(λ² − k), which is used in the solution x = e^{rt}.
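The role of λ in these roots can be checked directly. A small sketch, assuming the linear restoring term with coefficient k that matches the roots r = −λ ± √(λ² − k) above (parameter values are illustrative):

```python
import cmath

def roots(lam, k):
    # characteristic roots of (7): r = -lambda ± sqrt(lambda^2 - k)
    d = cmath.sqrt(complex(lam * lam - k))
    return (-lam + d, -lam - d)

# damped regime: negative real parts, so x = e^{rt} decays into the fixed point
r_plus, r_minus = roots(0.5, 1.0)

# Hopf point lambda = 0: purely imaginary roots ±i*sqrt(k), sustained oscillation
h_plus, h_minus = roots(0.0, 1.0)
```

At λ = 0.5 the real parts are negative and the solution spirals in; at λ = 0 the roots are ±i, leaving only the oscillatory factor, exactly the elimination of damping defining the bifurcation.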
In following with the definition of the Hopf bifurcation, we will observe a periodic solution arise when the damping coefficient λ, also the real part of the roots, becomes zero. When this happens, the solution becomes

    x(t) = e^{Re(r)t} · e^{i Im(r)t} = e^{±i√k t},

as the exponential term responsible for damping becomes one, and only the underlying oscillatory component is left.

We would like to understand, then, how damping is driven towards zero in our system; that is, what damping varies as a function of. In both phase transitions and Hopf bifurcations, we explore some parameter of the system that reflects the break in symmetry. The parameter of interest here is of course λ, the rate of diffusion across the membrane. Here we have a disordered phase while damping dominates, λ ẋ > c(t), and an ordered phase once c(t) ≥ λ ẋ, the incoming current countering the damping term in our dissipative dynamics. This can be thought of, in light of the above definition as leading to a limit cycle, as vectors emanating from the limit cycle centre and creating a displaced boundary. This also motivates us to define the onset of spiking activity as a phase transition, where the lowest energy configuration of the state changes along with a potential energy change. In both cases, what truly mediates the order parameter is the function c(t), and so both the transition and bifurcation are due to c(t).

There is a relationship between c(t) and the neural state, namely, that many neural channels are voltage gated. When the voltage reaches a critical point due to perturbations, called the threshold potential, c(t), which is initially zero, counters the dissipative tendency −λẋ in the force acting on the system. This is the discontinuity that characterises the phase transition, and the critical point at which it occurs. In line with (6) and Figure 1, when dissipation is less than or equal to c(t), the concentration of ions inside the cell reaches a maximum, and a spike is created.
Eventually, c(t) decreases again, and dissipation dominates the ion dynamics once more. This is why, despite permitting positive real root components, the oscillations are not unstable. We will make this phase transition more particular in the following section, where we use a toy model to examine exactly what causes it.

C. An Ising Model for Neural Spiking
To investigate this phase transition, we use the Ising model, a model from statistical mechanics which is particularly adept at describing the transitions between phases. We assume the reader is familiar with the Ising model, and do not fully introduce it. Overviews are given in [9] and [10], with particular application to neural dynamics in [8].

The Ising model's spin is a binarised variable taking on values s ∈ {−1, +1}. Suppose we have n channels and m pumps, lying on a narrow two-dimensional lattice Λ of size n × m = N. Suppose also that an opening in the membrane that contains an Na⁺ ion has the value +1, and one that does not contain an ion is −1. We suppose we are close to criticality, as the resting membrane potential of −70 mV is close to the threshold potential of −55 mV; this implies that a small but non-negligible proportion of spins will be s = −1. Indeed, we have only a small number of channels closed, but enough to destroy order in the model.

We model spin dynamics such that n > m and n = n_open + n_closed. This implies that when n_closed > m − n_open, we have dissipation due to the dominance of pumps, in line with (6) and our previously defined Hopf bifurcation. Similarly, if n_closed ≤ m − n_open, the number of open channels is greater than pumps, and we have a spike; equivalently, most openings are s = +1 and our model approaches ferromagnetism, ⟨s⟩ → +1. The transition rate of this subset of spins is what we will focus on; in particular, we will look at the feedback loop between cell voltage and opening of channels.

The low 'temperature' of this Ising model suggests that we have energy diffusion, which mediates the voltage in the neurone (following (3) as outlined in section II A). The transition rate of a closed channel at i ∈ Λ_N and time t is related to the effective field around the channel, which itself is given by the proportion of closed and open channels. This is a local magnetic field, following the Glauber dynamics on the standard Ising model [11]. We adapt it for a dissipative non-equilibrium case, as described below.

The transition rate obeys the following dissipative dynamic, using the local field in Glauber's Ising model, and a similar non-equilibrium Ising model described in [12, 13]:

    r(s_i, t) = e^{−λ_m + Σ_j^z s_{j,t}}    (8)

where λ_m is the amount of outflow due to pump activity, a constant for pump rate times the number of pumps, and z represents the nearest neighbours of the closed spin at i. The relationship suggested by this and λ in (5) indeed holds; the transition rate is sensibly proportional to the amount of damping in the system.

An incoming perturbation is a signal carrying some voltage in an adjacent neurone. This increases the local magnetic field, thus 'coaxing' some channels open according to the transition dynamics in (8).
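A literal numerical reading of the rate (8) takes only a few lines; the neighbour count and field values below are illustrative assumptions, not values from the text:

```python
import math

def transition_rate(lam_m, neighbour_spins):
    # equation (8): r(s_i, t) = exp(-lambda_m + sum over the z nearest-neighbour spins)
    return math.exp(-lam_m + sum(neighbour_spins))

# at rest, a mixed local field is suppressed by the pump outflow lambda_m
quiet = transition_rate(4.0, [+1, -1, -1, +1])
# a perturbation that aligns the neighbourhood overcomes the damping term
driven = transition_rate(4.0, [+1, +1, +1, +1])
```

Here driven/quiet = e⁴ ≈ 55: aligning the four neighbouring spins raises the opening rate by a large factor, which is exactly the positive feedback between open channels and the local field discussed next.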
Heuristically, if this perturbation is large enough (in particular, large enough to temporarily overcome the effect of damping in the transition probability), the local field becomes so strong as to exceed the dissipation in (8) and create successive spin flips, which magnetises our Ising model. Due to the small but non-negligible positive feedback loop between the proportion of open channels and the local magnetic field on any given site i, the spin dynamics are critically unstable even at rest, and this provides firing in response to critical perturbations.

The oscillatory component of the action potential then comes from the fact that after opening, the transition rate for the n_closed closed spins to close again remains high. They do so, and then the dissipation from pumps occurs so as to damp the system, as given in (6). In line with commentary at the end of section II A, if this transition occurs, and then another flip back to open occurs with small probability in the middle of repolarisation, another spike will be emitted; in which case, our model will portray bursting activity.

Finally, the probabilistic nature of (8) and the previously mentioned positive feedback loop suggest it is possible to observe spontaneous firing, although with small likelihood. This is readily justified with experimental evidence [14, 15] and theoretical justification [16, 17].

D. Non-Equilibrium Spiking Steady State
The phase transition dynamics in previous sections happen on extremely short time scales, and in reality, no neurone is quiet for long [18]. We now seek a principled model of a spiking steady state, which contains the microscopic insights previously defined over a larger spatial and temporal scale. Using the non-equilibrium principles defined previously, we may describe a spiking steady state in exchange with its environment as periodic motion under the influence of various forces. By this periodicity, a limit cycle in a two-dimensional state space necessarily exists, where the spiral of states through time is collapsed into a circular shape of particular dimensions. Because the action potential has a canonical shape, this region of the phase space is well defined. We will first examine a simplified sinusoidal motion, and then address a more realistic shape corresponding to a typical action potential.

As defined in section II B, a limit cycle arises for specific parameterisations of our system. Over longer spatial and temporal scales, we can define this limit cycle as being due to a transformation in the dynamical quantities we measure. Much like how at macroscopic scales the statistical mechanical quantities we work with become dynamical, defined by classical theories, we make use of a change in perspective that comes with our change in scale. A direct example can be observed along the boundary of the limit cycle, where the net force equals zero, and thus the potential energy is at a minimum.
In analysing this force at a macroscopic scale, we will only consider an effective maintenance of concentration gradient, that is, the net opposition to loss of energy in the system.

Noting, then, that our limit cycle has an attraction from an initial fixed point (the resting potential) to the boundary, where the potential energy is lower due to uptake dynamics inside the limit cycle, we can define a potential energy V(x) due to ion uptake through channels:

    V(x) = −∫_C F_up · dγ.

Here, we use the convention of positive forces being directed away from the limit cycle centre.

Since the path taken, γ, is totally determined by the forces present, this line integral reduces to:

    V(x) = −∫_x^{x_f} F_up · dx,

with x being contained inside the limit cycle and x_f lying on the boundary of the limit cycle.

Keeping the metaphor, we may define the kinetic energy T of a unit mass flowing along a trajectory in the state space as ½v². We may formulate a Lagrangian for this system in terms of T and V(x),

    L = T − V(x) = ½ẋ² + ∫_x^{x_f} F_up · dx.

We also note that there is the previously highlighted dissipation due to pump activity, and utilise the non-variational Euler-Lagrange equation, accounting for dissipation Q,

    d/dt (∂L/∂q̇) − ∂L/∂q = Q,

which yields

    ẍ − F_up = Q.

Since we know the form of this dissipation as it is given in the limit cycle, we may relate this to Q and recover a complete equation of motion for the system:

    ẍ = F_up − F_diss.    (9)

From the perspective of an initial point in the limit cycle, this dissipative component of the total force acting on the system has the natural effect of defining the boundary and restricting the final position. This also follows from the discussion of the high likelihood of 're-transition' in (8), at the end of section II C.

We may also take the perspective of an initial point outside of the limit cycle; although less biologically plausible, we will note that the end result is the same.
The dissipative forces present will drive the system towards an equilibrium point, a point of minimum potential energy, where there is no force acting on the system. This creates an attraction towards the limit cycle centre, and thus serves the role of a restoring force. The total force ẍ is more complicated than only this, however; there remains a maintenance of ion gradients, and thus current, by the flow through channels, which resists this dissipation.

The potential energy of the attractor is thus the integral of this restoring force. However, we must also account for the displacement of the equilibrium point by the modulation of this restoring force. We may simply add the forces inside the limit cycle, such that the potential energy satisfies that it must be a minimum along the boundary of the limit cycle, and that the other forces present are accounted for:

    V(x) = −∫_C (−F_diss + F_up) · ds.

We again define the kinetic energy T as ½v², and with the potential energy V, we apply the standard Euler-Lagrange equation to find:

    ẍ + F_diss − F_up = 0,

or

    ẍ = F_up − F_diss,

recovering our equation of motion, (9).

Note a duality present, in which the final position defined by this minimum is now determined by the uptake force. In this case, it would be possible to define an F_net which absorbs the uptake force into the dissipative force, 'shortening' the vector field to the boundary of the limit cycle and automatically restricting the final position of the system to a point along the limit cycle boundary. In particular, this is the point closest to the initial position, given by a least action principle. The role that uptake force plays in defining the limit cycle from the interior is consistent with that which is suggested by section II B.

We will note one thing further about this system, namely, that current, as the flow of ions, increases with increasing imbalance (uptake or loss) of ions. Correspondingly, velocity, as the time integral of force, increases with time.
Since we have projected the helical periodic motion through time onto the phase plane, here, movement through a physical distance becomes synonymous with movement in time. Thus, as a trajectory approaches the limit cycle boundary, the velocity increases in proportion to the increase in attraction, or decrease in potential energy, and the system is approximately conservative. This is true regardless of the magnitude of the vectors in time; however, we may be confident that as the system equilibrates, diffusion stops, so the force vectors decrease in magnitude as they approach the limit cycle. Nonetheless, the integral of these forces continues to increase, as they remain in the same direction in space. We notice, in line with the above observation, that this equation is Hamiltonian, that is, satisfies autonomy. This reveals a number of interesting things about the dynamics of the system, which we have used independently throughout the paper, including phase space volume preservation and the equal response of the neurone to dissipation. Thus, we have an understanding of both the consistently periodic and non-equilibrium nature of a neurone.

As stated, we know the form of (9) on the microscopic, short-time scale. We can transform this to a suitable coarser scale as follows. Using (6) and the discussion in II C, the spiking steady state condition implies that λ > 0, giving

    ẍ = θ ẋ − a‖x‖

for some new effective parameter, θ. We also use the homogeneity of r at large spatial scales to look only at the phase plane dynamics, and therefore decouple the phase plane variables from the probability distribution across r.

We separate this system into a spiking variable x and an adaptation variable y,

    ẍ = a ẋ − b‖x‖
    ÿ = c ẏ − d‖x‖

where we have rewritten θ_x = a, θ_y = c, a_x = b, and a_y = d. Now, we integrate with respect to time:

    ẋ = ax − b ∫^s ‖x‖ dt
    ẏ = cy − d ∫^s ‖x‖ dt    (10)

with s a time variable. This yields a system with the form of a PID controller, which controls motion by both position and the integral of position, confirming our notion of the maintenance of a stationary, out-of-equilibrium point.

We may argue that the time integral of phase plane distance is the area xy, due to the following rewriting of the system:

    dx = f(x, y) dt

which implies

    ∫^s ‖x‖ dt = ∫^s ‖x‖ dx / f(x, y).

If we assume f(x, y) is approximately constant over infinitesimally small time periods dt, we have

    ∫^s ‖x‖ dt = (1/f(x, y)) ∫^s ‖x‖ dx.

Performing this integral, we obtain the integral of the boundary of a curve (the distance from zero at each time) along the curve, which is the area,

    ∫^s ‖x‖ dt = α ∫^s ‖x‖ dx = α x y(s).    (11)

Then, from (10) and (11), and arguing that adaptation varies in opposition to spiking due to the shared resources imposed by a local continuity condition, we have the Lotka-Volterra equations

    ẋ = ax − b′xy
    ẏ = −cy + d′xy.

In fact, the solution to the Lotka-Volterra system exactly plots a stereotypical action potential for a particular parameter set, shown in Figure 2.
Together, these imply that the Kolmogorov process underlying the Lotka-Volterra equations is comparable to neural dynamics, which is quite reasonable, since they share the same non-equilibrium, diffusion-based characteristics. It has also been shown that the Lotka-Volterra equations have an associated Hamiltonian; further, since they model the interactions between predator and prey, we are inspired to refer to the dynamics of our own system as the interplay between a spike variable and an adaptation variable. We interpret the spike as using resources, and the adaptation variable, which lengthens the inter-spike interval, as allowing the replenishment of these resources.
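The Hamiltonian remark can be made concrete: the Lotka-Volterra system above conserves the first integral $H = d'x - c\ln x + b'y - a\ln y$ along trajectories. The sketch below integrates the system with a fourth-order Runge-Kutta step, checks this conservation, and counts spike-like peaks in $x$. The rates $a = 0.6$ and $c = 0.45$ and the initial condition $x, y = 10$ follow Figure 2, while the interaction strengths $b' = d' = 0.02$, the step size, and the horizon are assumed here for illustration.

```python
import math

a, c = 0.6, 0.45        # rates taken from the caption of Figure 2
bp, dp = 0.02, 0.02     # interaction strengths b', d' (assumed)
x, y = 10.0, 10.0       # initial condition, as in Figure 2
dt, steps = 1e-3, 50000 # step size and number of steps (assumed)

def f(x, y):
    # The Lotka-Volterra vector field derived in the text.
    return a * x - bp * x * y, -c * y + dp * x * y

def conserved(x, y):
    # Known first integral of the Lotka-Volterra system (its "Hamiltonian").
    return dp * x - c * math.log(x) + bp * y - a * math.log(y)

H0 = conserved(x, y)
peaks = 0
x_prev2, x_prev = x, x
for _ in range(steps):
    # Classical fourth-order Runge-Kutta update.
    k1 = f(x, y)
    k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = f(x + dt * k3[0], y + dt * k3[1])
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    if x_prev > x_prev2 and x_prev > x:
        peaks += 1      # local maximum of x: one "spike"
    x_prev2, x_prev = x_prev, x
```

Conservation of $H$ up to the integrator's error, alongside repeated peaks in $x$, reflects the closed, periodic orbits of the Lotka-Volterra system that underlie the regular spike train in Figure 2.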
Figure 2. Stereotypical neural firing dynamics given by the numerical solution of the Lotka-Volterra equations. Plotted is a temporally discretised, numerical solution of the Lotka-Volterra system for initial $x, y = 10$, integration constant $C = -b' = d' = 0$, $a = 0.6$, and $c = 0.45$. Obvious features of the neural spike are present, such as oscillation about an equilibrium point of $\mu = -70$, shallow hyperpolarisation and large-amplitude spikes, a short ($< 15$ ms) inter-spike interval, a consistent point at which a spike begins rising (the threshold potential), and a fairly regular or stereotypical shape. These features of the equation match precisely the features known from biological data. The physical or biological interpretation of these parameter values is not yet clear.
III. CONCLUSION
We have combined dynamical and statistical insights to understand how neural spiking arises as a non-equilibrium phenomenon, and derived a model to capture these qualities. The model, from molecules to spikes, uses various dynamical quantities and their statistical mechanical underpinnings to study the evolution of neural membrane voltage. We evaluated the essential role that the non-equilibrium nature of complementary diffusion and dissipation plays, at once as simple as maintaining a non-equilibrium stationary state and as complex as allowing spiking behaviour. Due to its explanatory power and insightful results, this model is a valuable description of neural cell dynamics.

[1] A L Hodgkin and A F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve.
The Journal of Physiology, 117(4):500–544, 1952.
[2] Thierry Mora and William Bialek. Are biological systems poised at criticality? Journal of Statistical Physics, 144(2):268–302, 2011.
[3] Mara Almog and Alon Korngreen. Is realistic neuronal modeling realistic? Journal of Neurophysiology, 116(5):2180–2209, 2016.
[4] Yuri I Wolf, Mikhail I Katsnelson, and Eugene V Koonin. Physical foundations of biological complexity. Proceedings of the National Academy of Sciences, 115(37):E8678–E8687, 2018.
[5] William Bialek. Biophysics: Searching for Principles. Princeton University Press, 2012.
[6] Eugene M Izhikevich. Neural excitability, spiking and bursting. International Journal of Bifurcation and Chaos, 10(06):1171–1266, 2000.
[7] Eugene M Izhikevich. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. The MIT Press, 2006.
[8] Dalton A R Sakthivadivel. Formalising the use of the activation function in neural inference. arXiv e-prints, page arXiv:2102.04896, February 2021.
[9] Dalton A R Sakthivadivel. A pedagogical discussion of magnetisation in the mean field Ising model. arXiv e-prints, page arXiv:2102.00960, January 2021.
[10] Stephen Brush. History of the Lenz-Ising model. Reviews of Modern Physics, 39(4):883–893, 1967.
[11] Roy J Glauber. Time-dependent statistics of the Ising model. Journal of Mathematical Physics, 4(2):294–307, 1963.
[12] Luisa Andreis and Daniele Tovazzi. Coexistence of stable limit cycles in a generalized Curie–Weiss model with dissipation. Journal of Statistical Physics, 173(06):163–181, 2018.
[13] Raphael Cerf, Paolo Dai Pra, Marco Formentin, and Daniele Tovazzi. Rhythmic behavior of an Ising model with dissipation at low temperature. arXiv e-prints, page arXiv:2002.08244, February 2020.
[14] Michael Häusser, Indira M Raman, Thomas Otis, Spencer L Smith, Alexandra Nelson, Sascha du Lac, Yonatan Loewenstein, Séverine Mahon, Cyriel Pennartz, Ivan Cohen, and Yosef Yarom. The beat goes on: Spontaneous firing in mammalian neuronal microcircuits. Journal of Neuroscience, 24(42):9215–9219, 2004.
[15] Indira M Raman, Amy E Gustafson, and Daniel Padgett. Ionic currents and spontaneous firing in neurons isolated from the cerebellar nuclei. Journal of Neuroscience, 20(24):9004–9016, 2000.
[16] Bing Jia, Huaguang Gu, Li Li, and Xiaoyan Zhao. Dynamics of period-doubling bifurcation to chaos in the spontaneous neural firing patterns. Cognitive Neurodynamics, 6(1):89–106, 2012.
[17] Alberto Mazzoni, Frédéric D Broccard, Elizabeth Garcia-Perez, Paolo Bonifazi, Maria Elisabetta Ruaro, and Vincent Torre. On the dynamics of the spontaneous activity in neuronal networks. PLOS ONE, 2(5):1–12, May 2007.
[18] Alberto Mazzoni, Frédéric D Broccard, Elizabeth Garcia-Perez, Paolo Bonifazi, Maria Elisabetta Ruaro, and Vincent Torre. On the distribution of firing rates in networks of cortical neurons.