A novel wave decomposition for oscillatory signals
Cristina Rueda, Alejandro Rodríguez-Collado, Yolanda Larriba
Department of Statistics and Operations Research, Universidad de Valladolid, Valladolid, Spain.
E-mail: [email protected]
Abstract. Oscillatory systems arise in different science fields. Complex mathematical formulations with differential equations have been proposed to model the dynamics of these systems. While they have the advantage of having a direct physiological meaning, they are not useful in practice as a result of the parameter adjustment complexity and the presence of noise. In this paper, a novel decomposition approach into AM-FM components, competing with Fourier and other decompositions, is presented. Several interesting theoretical properties are derived, including the Ordinary Differential Equations describing the signal. Furthermore, the usefulness in real practice is demonstrated by analysing signals associated with neuron synapses and by addressing other questions in Neuroscience.
Keywords: Oscillatory Signal, Frequency Modulation, Non-linear Models, FMM model.
1. Introduction
A system in which a particle or set of particles moves, returning to its initial state after a certain period, is an oscillatory system, and an oscillation is the repetitive variation of a signal, or a measure, associated with the system. Oscillations occur in physical and biological systems but also in human society. Examples of oscillations include the swinging pendulum, the periodic firing of a nerve, the expression of circadian clock genes, the beating of the human heart, signals in the radio frequency, or business cycles in economics. The periodic motion, characteristic of oscillations, is encountered in all areas of science, and a huge number of investigators from different disciplines have contributed to the advancement of the field using particular perspectives, as they are aware of diverse real problems specific to their areas. The terminology, the fundamental concepts, the principles, the conventions, the methods, and the theory of these perspectives are often quite different.

On the one hand, there is the focus of the signal analyst, which emphasizes the time-frequency approach and the development of algorithms to process and analyse observable signals; most researchers in the communication field follow this approach; a recommended reference is the book Boashash (2016). Alternatively, from a more physical focus, a dynamic system is described primarily by a set of differential equations. Basic references are Wigren (2015), Ashwin et al. (2016), and Pikovsky and
Rosenblum (2015). This approach has been the preferred one by researchers in electrophysiology and Neuroscience. The statistical perspective is a third approach to the subject, suitable when real and noisy signals are observed. It considers models that assume noise terms and is useful to identify the real signal features. This approach has been preferably adopted in Chronobiology, as seen in Larriba et al. (2019).

Extracting features from an observed signal is the first step towards data analysis, and an efficient algorithm to extract the desired features from the recorded signal is also needed. In the case of oscillatory signals, the main features are the number of oscillatory components and the amplitude or peak time of each oscillation. For instance, for physiological signals, it is well known that signal oscillations contain plenty of information about a person's health condition. In general, inferring the dynamical information from a time series is challenging. Fourier Decomposition (FD) is a traditional approach to the analysis of such signals; however, if the signal is not composed of harmonic functions, then the approach is not useful to extract the features. For example, respiratory flow signals do not usually oscillate like a sinusoidal function, since inspiration is usually shorter than expiration, and this difference is intrinsic to the respiratory system.

Several decomposition approaches have been considered in the literature, FD being just one of them. Among the recent AM-FM decomposition proposals are Lin et al. (2018) and Sandoval and De Leon (2018). Kowalski et al. (2018) gives a useful review of methods and revises several requirements a time series analysis method for an oscillatory signal should satisfy.

In this paper, a novel decomposition of the signal into AM-FM parametrized components, where the AM is always constant and the FM is modelled using a Möbius transformation, is presented. In particular, the individual components are
FMM signals, as described in Rueda et al. (2019), while the multicomponent signal of order m is denoted as FMM_m. A fascinating application, a germ of this work, is Rueda et al. (2020), where a physically meaningful wave decomposition for ECG signals is given.

There are significant advantages of the FMM decomposition approach as against its competitors for modeling periodic or quasi-periodic signals. First, the simple parametric formulation that, in particular, enables rigorous and parametrized definitions for basic elements. Second, the interpretability of the parameters and their flexibility to describe and differentiate a variety of wave patterns. And third, the accuracy of the estimators and the robustness against noise.

While this paper addresses a broad audience of data analysts, it is particularly aimed at the mathematicians and statisticians who will most value the strengths of the model from a theoretical point of view, in addition to the applied one. Many properties of the model are rigorously described. In particular, the Analytic Signal (AS) associated to the
FMM_m model and other essential elements are derived. In addition, a Dominant Phase definition is presented which has interesting properties. Moreover, the FMM_m signal is characterized as the solution to a system of differential equations; while, on the statistical side, an estimation algorithm is given and such properties as consistency and accuracy are shown. Regarding applications, here we deal with problems in Neuroscience, which are also interesting for researchers in electrophysiology and biology. Specifically, we deal with Action Potential curves (AP) that measure the fluctuation of the potential of a neuron; that is, the difference between the electrical potential inside and outside the cell due to an external stimulus. The AP, which describes the system for about a millisecond, starts from a resting potential of approximately -70 mV and has several stages. At Stage 1 (Depolarization) the voltage rises, at Stage 2 (Repolarization) the voltage falls, and at Stage 3 (Hyperpolarization) the negative voltage returns to the resting potential level. If the depolarization is large enough, the cell spontaneously spikes and then goes to a refractory period, during which the cell cannot spike. The typical shape of an AP with a single spike is shown in Figure 1. For a short introduction see Raghavan et al. (2019).

Figure 1. A typical Action Potential curve and its phases. The signal has been generated using an FMM model.

For researchers in the subject, it is of critical importance to extract features of the spike generation of individual neurons, among them, the location and shape of the spike. These characteristics are important to determine the cell types and their functions and to help us to understand the physiological process, see Mensi et al. (2012) or Trainito et al. (2019), among others. It can be said that mathematical modeling and analysis of waveform sequences have been one of the central problems in the field of computational Neuroscience.

The description of a dynamic process by Ordinary Differential Equations (ODE) is a traditional approach of analysis. Many models have been described in the literature (see Teeter et al. (2018) and references therein); the Hodgkin and Huxley model, firstly presented in Hodgkin and Huxley (1952), is the most widely studied, as it has served successfully to study the bio-electrical activity of different organisms. The family of leaky integrate-and-fire (LIF) models has also recently become very popular in the literature (Teeter et al. (2018), Gerstner et al. (2014)). While these models have biological, physical and chemical foundations, they require a precise measurement of the studied neuron to adjust the parameters and are useful only in controlled experiments. Besides, neuronal activity is often noisy and non-stationary across time, which makes the problem of extracting features significantly challenging. Flexible and simple approaches, such as the FMM, that account for the noise and parametrically describe the oscillatory signals, are suitable to tackle the problem.

In this paper, the potential of the FMM approach to model AP signals is illustrated using real signals from the Allen Cell Types Database (ACTD) (http://celltypes.brain.map.org). This database is freely available and has been the reference data for many authors (Teeter et al. (2018) and references therein). The FMM_m model is compared with two widely used approaches as are the FD and the Spline model. The FMM_m outperforms both of them. Furthermore, across the paper, several properties of the FMM model will be included for the construction and analysis of Phase Response Curves (PRC). A PRC describes the variation of quantities (often the phase) of the system in response to perturbations or stimuli.

The rest of the paper is as follows: Section 2 revises some basic elements of oscillatory systems and Section 3 presents the FMM model and its mathematical and statistical properties. In Section 4, the contributions of the methodology to the study of AP and PRC curves in Neuroscience are explained and the results from numerical studies with real AP data are shown. Finally, a brief discussion is included in Section 5, and the proofs of the theoretical results are in the appendix.
2. Basic elements in oscillatory systems
Different types of variables or signals are defined in periodic oscillatory systems: those applying directly to the motion and usually observable, such as the membrane voltage, and those describing the periodic nature of the motion: amplitude, period, and phase, which are not observable directly. The period is the time taken for an oscillating system to return to its initial position, which we assume is known and fixed. The period of the oscillation is normalized here to be 2π. On the other hand, the phase is the most elusive of these quantities but a fundamental one, as it is the key to describing variations among signals. Hence, having a proper definition of phase is essential.

Moreover, it is generally accepted that for a given oscillatory phenomenon, there exists an underlying complex-valued signal: S(t) = µ(t) + iν(t), t ∈ [0, 2π]. However, there is no unique way to derive the complex signal when only the real signal is known. The AS approach, the most extended in the literature, is briefly presented below, along with other elements such as the phase space and the periodic orbit.

For many authors, the FD is one of the most important mathematical tools in signal analysis. The FD is the representation of a real signal as a sum of components, as follows:

µ(t) = a_0 + Σ_{k=1}^∞ [ a_k cos(kω_0 t) + b_k sin(kω_0 t) ]

Besides, the Hilbert Transform (HT) is considered one of the most critical operators in mathematical analysis, which we define below to facilitate the reading.

Definition 1. HT on the real line: Let f ∈ L^p(ℝ), 1 ≤ p < ∞; the HT of f on the real line is defined by

HT(f(t)) = p.v. (1/π) ∫_{−∞}^{∞} f(x)/(t − x) dx,

where p.v. denotes the principal value singular integral.

Finally, the AS associated to a real signal is defined as follows,
Definition 2.
Analytic Signal representation of µ(t): S(t) = µ(t) + iν(t), where ν(t) = HT(µ(t)).

The AS was first defined by Gabor (1946) as that complex signal, underlying an observed real signal, constructed with the HT. The AS has interesting properties, and researchers often assume that the underlying complex signal associated to an oscillatory process is an AS, which simplifies the analysis (see Picinbono (1997), Sandoval and De Leon (2015)). Given a complex signal S(t) = µ(t) + iν(t), it can be expressed as A(t)e^{iφ(t)}, where

φ(t) = 2 arctan( ν(t) / (A(t) + µ(t)) );  A(t) = √( µ(t)² + ν(t)² ).   (1)

A(t) and φ(t) are called the signal's Instantaneous Amplitude (IA) and Instantaneous Phase (IP), respectively. The derivative of φ(t) is known as the Instantaneous Frequency (IF), which is expected to be positive in applications, as argued, for instance, in Sandoval and De Leon (2015). Hence, the AS is not always interpretable in a way which is meaningful and representative of physical phenomena, in particular for multicomponent signals (Boashash (2016)). Nevertheless, the signal could be modeled as a weighted sum of component signals, as in Sandoval and De Leon (2018), in which case the problem is that the decomposition is not unique. In order to get interpretable results, the role of each of the components should be identified. This question is dealt with later in the paper.
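As a quick numerical illustration, the IA and IP of a complex sample can be computed directly; the sketch below (pure Python; the function name is ours, not from the paper) takes the phase as the polar angle of (µ(t), ν(t)), written in half-angle form:

```python
import math

def inst_amplitude_phase(mu, nu):
    """IA and IP of a complex sample S = mu + i*nu:
    A = sqrt(mu^2 + nu^2); phi is the angle of (mu, nu), in half-angle form."""
    A = math.hypot(mu, nu)
    # equals the polar angle atan2(nu, mu) whenever A + mu > 0
    phi = 2.0 * math.atan2(nu, A + mu)
    return A, phi

# An FMM-type sample mu = A0*cos(phi0), nu = A0*sin(phi0) recovers (A0, phi0)
A0, phi0 = 2.0, 0.7
A, phi = inst_amplitude_phase(A0 * math.cos(phi0), A0 * math.sin(phi0))
```

For a monocomponent sample the pair (A0, phi0) is recovered exactly, since ν/( A + µ ) reduces to tan(φ/2).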
There are multiple definitions of phase in the literature that may lead to contradictory results. A simple property, remarked by Winfree (2001), is that every point on the oscillation can be uniquely described by a phase. Hence, for many authors, the phase has a natural definition as the time along the cycle (Winfree (2001), among others). Besides, for signals such as µ(t) = A cos(φ(t)), where φ(t) is a periodic function, φ(t) is also an interesting phase definition. A popular approach, adopted by some authors such as Deng et al. (2016), Oprisan (2017), and Caranica et al. (2019), is to use the IP associated with the AS approach, defined in (1). The ambiguity of the phase definition is well explained in Osipov et al. (2003), Chavez et al. (2006), and Freitas et al. (2018), where other alternatives are also provided.

The phase definition, to be useful in practice, should also be applied to the underlying complex signal, and, reciprocally, the representation on a complex plane is essential to derive a proper phase definition. Hence, the concept of phase space, a space in which all possible states of a system are represented, with each possible state corresponding to one unique point, is also crucial in dynamic systems. The degrees of freedom or dimensionality of a dynamic system is the number of variables governing the state of the system at time t. The phase space has the same dimension as the degrees of freedom of the system and is often two, in which case it is called a phase plane. In classical mechanics and other fields, the phase space is obtained by plotting the positions against the velocity (Caro-Martín et al. (2018)). The phase plane associated with an AS is obtained by plotting the real signal µ(t) against HT(µ(t)), and the angle at a given point is the IP.

The system's evolving state over time traces a trajectory through the phase space. The trajectory of a periodic system, the image of the periodicity interval in the state space, is a closed curve called the periodic orbit or cycle. Given a closed curve and a point in the phase space, the winding number is the integer representing the total number of times that the curve travels counter-clockwise around that point. The maximum value of the winding number can be interpreted as the number of oscillations within a period. A signal, typically monocomponent, with only one oscillation describes a closed orbit with a maximum winding number of 1 (Krantz (2012)). If a center point with maximum winding number is found, then the angle phase definition with respect to that point is an admissible phase definition. The main drawback is that very often such a point is not easy to find. Some examples are given in the next section.
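The winding number of a sampled orbit can be approximated directly from the data. The following sketch (pure Python; names are illustrative, assuming the points sample the closed curve densely) accumulates the signed angle increments around a candidate center:

```python
import math

def winding_number(points, center=(0.0, 0.0)):
    """Total signed turns of a closed polyline around `center`,
    accumulated as the sum of successive angle increments."""
    cx, cy = center
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]  # wrap around to close the curve
        a0 = math.atan2(y0 - cy, x0 - cx)
        a1 = math.atan2(y1 - cy, x1 - cx)
        d = a1 - a0
        # map each increment to (-pi, pi] so no step counts as a full turn
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# A single-oscillation orbit (a circle) winds once around an interior point
circle = [(math.cos(k / 100 * 2 * math.pi), math.sin(k / 100 * 2 * math.pi))
          for k in range(100)]
```

With respect to an exterior point the same curve has winding number 0, which is why the choice of center point matters for the angle phase definition.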
3. The FMM_m model: a novel decomposition approach

Oscillatory signals are defined in the time domain and, without loss of generality, it is assumed that the time points are in [0, 2π]. In any other case, transform the time points t′ ∈ [t_0, T + t_0] by t = (t′ − t_0)2π/T. In the following, oscillations are also referred to as waves.

Let υ = (A, α, β, ω)′ be the four-dimensional parameter vector describing a single FMM signal, defined as the following wave: W(t, υ) = A cos(φ(t, α, β, ω)), where A is the wave amplitude and

φ(t, α, β, ω) = β + 2 arctan( ω tan( (t − α)/2 ) ).   (2)

The FMM_m model is defined as a parametric additive m-component signal plus error model, as follows:

Definition 3.
FMM_m model. For the observations t_1 < ... < t_n,

X(t_i) = µ(t_i, θ) + e(t_i), i = 1, ..., n;   (3)

where µ(t_i, θ) = M + Σ_{J=1}^m W(t_i, υ_J), and

• θ = (M, υ_1, ..., υ_m)′ verifying:
  – M ∈ ℝ; υ_J ∈ Θ_J = ℝ⁺ × [0, 2π] × [0, 2π] × [0, 1]; J = 1, ..., m,
  – α_1 ≤ α_2 ≤ ... ≤ α_m ≤ α_1 + 2π,
  – A_1 = max_{1≤j≤m} A_j,
• (e(t_1), ..., e(t_n))′ ∼ N_n(0, σ²I).

The identifiability of the model parameters is guaranteed by including the artificial restrictions above. The papers by Rueda et al. (2019) and Rueda et al. (2020), considering particular cases of this model, show the broad type of signals that the model represents and provide parameter interpretation as well as some basic properties.
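A signal satisfying Definition 3 (without the noise term) can be synthesized directly. The sketch below is a minimal pure-Python illustration (function and variable names are ours, not from the paper); note that at t = α_J the J-th wave equals A_J cos(β_J), since the arctangent term vanishes there:

```python
import math

def fmm_wave(t, A, alpha, beta, omega):
    """Single FMM wave W(t; A, alpha, beta, omega) = A*cos(phi(t)),
    with phi(t) = beta + 2*arctan(omega * tan((t - alpha)/2))."""
    phi = beta + 2.0 * math.atan(omega * math.tan((t - alpha) / 2.0))
    return A * math.cos(phi)

def fmm_signal(t, M, components):
    """Noise-free FMM_m signal mu(t) = M + sum_J W(t; upsilon_J)."""
    return M + sum(fmm_wave(t, *ups) for ups in components)

# Two illustrative components; the first has the largest amplitude (A_1 = max A_j)
comps = [(3.0, 1.0, 0.5, 0.3), (1.0, 2.5, 1.0, 0.8)]
x = fmm_signal(1.0, 0.1, comps)
```

Each wave is 2π-periodic (shifting t by 2π leaves tan((t − α)/2) unchanged), so the sum is as well.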
Maximum Likelihood Estimator (MLE)
The MLE of the FMM_m model parameters is the solution to the optimization problem:

θ̂ = arg min_{θ ∈ Θ} Σ_{i=1}^n ( X(t_i) − µ(t_i, θ) )²,   (4)

where Θ refers to the parameter space for θ, a subset of ℝ^{4m+1} given by ℝ × Θ_1 × ... × Θ_m plus the restrictions. When the true parameter configuration verifies α_J ∈ (0, 2π), β_J ∈ (0, 2π), ω_J > 0; J = 1, ..., m, the standard regularity conditions on the response function are verified for FMM_m, and well-known results in nonlinear normal regression guarantee the consistency and asymptotic normality of the MLE estimators. The main pitfall is how to find the MLE.

A backfitting algorithm, Algorithm 1 below, is proposed to solve the optimization problem (4), which at each step fits an FMM model to the residue using the algorithm designed by Rueda et al. (2019). This is repeated until the difference between the variability explained by the model in two consecutive steps is less than a constant C. C depends on the experiment and on the researcher. A measure of the variance proportion explained by the model is defined as follows:

R² = 1 − Σ_{i=1}^n ( X(t_i) − µ(t_i, θ̂) )² / Σ_{i=1}^n ( X(t_i) − X̄ )²   (5)

with n being the number of observed values.

Algorithm 1:
MLE FMM_m (θ̂) estimation

(a) Initialize:
    M̂ = (1/n) Σ_{i=1}^n X(t_i); Â_J = 0, α̂_J = 5, β̂_J = π, ω̂_J = 1; J = 1, ..., m.

(b) Do until R² increases less than C:
    1. For each J; J = 1, ..., m:
       (υ̂_J, M̂) ← arg min_{υ_J ∈ Θ_J, M ∈ ℝ} Σ_{i=1}^n ( X(t_i) − Σ_{I≠J} W(t_i, υ̂_I) − M − W(t_i, υ_J) )²
    2. Relabel the components so that A_1 = max_{1≤j≤m} A_j and α_1 ≤ ... ≤ α_m ≤ α_1 + 2π.
    3. µ(t_i, θ̂) = M̂ + Σ_{J=1}^m W(t_i, υ̂_J).
    4. Calculate R².

Success, in terms of convergence to the MLE from a given starting value, is not initially guaranteed, although the solution converges in probability to a local minimum. Our experience fitting FMM models to real and simulated data indicates that failure of convergence is unlikely. Moreover, the excellent performance of the backfitting algorithm has been shown with the simulation results for the FMM model describing the ECG signal in Rueda et al. (2020). In this paper, we have checked that this is also true in the particular case of FMM models describing real action potential curves.

The likelihood-based analysis of the FMM_m model would benefit from the ability to conduct hypothesis testing problems or derive confidence intervals. Specifically, assuming the FMM model, both hypothesis tests on arrhythmicity and the sinusoidal shape are defined parametrically, see Rueda et al. (2019). Moreover, other interesting hypothesis testing problems can be defined depending on the problem at hand. Provided that the parameters that describe the hypothesis are conveniently chosen, it is straightforward to develop the likelihood ratio test and confidence intervals using such standard methods as bootstrap.

For the rest of the paper, we refer to the FMM_m model or
FMM_m signal, depending on whether (3) or µ(t, θ) is considered; that is, on whether the noise is considered or not. In addition, the dependence of signals, waves, phases, and models on the parameters is omitted when no confusion arises. Specifically, W_J(t) = W(t, υ_J), φ_J(t) = φ(t, α_J, β_J, ω_J), and µ(t) = µ(t, θ). Without loss of generality, we assume for an FMM_m signal that M = 0 for the discussion in this section, as M can always be assigned to the component 0 where A_0 = M and φ_0(t) = 0.

The AS of an FMM_m signal

In general, given a real signal µ(t), the associated AS does not have a closed expression, even when the signal has a simple expression such as µ(t) = B cos(ψ(t)), as could be expected. Examples in Picinbono (1997) illustrate this statement. However, for FMM_m signals, the AS can be easily derived analytically, as shown in Theorem 1.

Theorem 1.
Let µ(t) be an FMM_m signal; the associated AS is S(t) = µ(t) + iν(t), where ν(t) = Σ_{J=1}^m A_J sin(φ_J(t)).

The proof follows taking into account that HT(Σ_{J=1}^m W_J(t)) = Σ_{J=1}^m HT(W_J(t)) and that HT(W_J(t)) = A_J sin(φ_J(t)); the latter is shown in Rueda et al. (2019). Now, the AS phase is easily derived as the angle of the vector (µ(t), HT(µ(t))) with respect to (0, 0). However, not every FMM_m signal has a valid IF interpretation. Specifically, for the FMM_2 model, a necessary condition for the IF to be interpreted as a non-negative weighted average of the IFs of the two components, taking into account that A_1 > A_2, is:

A_2/A_1 ≥ −cos( φ_1(t) − φ_2(t) ).

For FMM_m with m > 2, the conditions for a valid IF are more demanding. Chavez et al. (2006) and Freitas et al. (2018) show scenarios, like those in Wei and Bovik (1998), where the AS fails, corresponding to signals with more than one dominant oscillation. In this section, two examples are shown that illustrate how the AS fails even in scenarios where there is an apparent single oscillation. First, in Figure 2, the same signal as in Figure 1 is considered; the phase space for that signal (left) and the phase space for the same but centered signal (right) are plotted. Quite different AS phases are defined from these two signals describing the same system, as the center point (rotation center) is different. The second example is shown in Figure 3, where a real signal (experiment number 486754703, sweep 17 from the ACTD) is analysed; an FMM model is fitted to the observed data (top), and the associated phase space (bottom right) and IF (bottom left) are plotted. Therefore, even when the signal exhibits a single dominant oscillation, the AS phase definition could fail and an alternative phase definition is needed; even more considering that the calculation of the phase and the IF are highly susceptible to background noise. The simple analytical expression of the FMM model facilitates a proper and robust phase definition that is presented below as the Dominant Phase.

Figure 2. The phase spaces, with the real and imaginary signals defining the AS, are plotted for µ(t) (left) and µ(t) − µ̄ (right), respectively, where µ(t) is the signal in Figure 1 and µ̄ is its mean.

In some applications, a noticeable characteristic of the signal is the existence of a dominant component, as is the case of APs from single neurons, where the dominant component corresponds to the moment when the neuron spikes (Wei and Bovik (1998)). The dominant component amplitude is expected to be much larger than those of the other components in such cases. Therefore, the signal phase, IA, and IF are approximately identical to those of the dominant component. These statements are the basis of the definition of the
Dominant Phase (DP) for FMM_m signals, Definition 4 (a), and the Dominant Peak Time (DPT), Definition 4 (b). Moreover, the definitions for the Dominant Instantaneous Frequency (DIF) and the Dominant Instantaneous Amplitude (DIA) are Φ̇_D(t) = ∂Φ(t)/∂t and A_1, respectively. In the simple case m = 1, the DP coincides with the IP from the AS.

Definition 4.
The dominant phase and the dominant phase peak. Let µ(t) be an FMM_m signal where A_1 = max_{1≤J≤m} A_J.

(a) The dominant phase is defined as: Φ(t) = φ_1(t) − β_1.

(b) The dominant peak time is defined as: Φ_peak = 2 arctan( (1/ω_1) tan(−β_1/2) ).

Figure 3. The plot on the top is a real oscillatory signal (grey) from the ACTD (experiment 486754703, sweep 17) and the estimated FMM signal in red (µ(t)). The plots on the bottom are µ(t) against HT(µ(t)) (right), and t against φ̇(t) = ∂φ(t)/∂t (left).

Some interesting properties are shown in Proposition 5. First, it is shown that Φ(t) increases monotonically with time, which makes the formulation physically interpretable. Moreover, the derivatives of Φ(t) and Φ_peak with respect to the parameters are given because they can be useful to construct PRCs. We will return to this question in Section 4.
Proposition 5.
Let µ(t) be an FMM_m signal and Φ(t) as above; then:

(a)
  (i) ∂Φ(t)/∂t = ω_1 + ((1 − ω_1²)/(2ω_1)) (1 − cos Φ(t)),
  (ii) ∂Φ(t)/∂α_1 = −[ ω_1 + ((1 − ω_1²)/(2ω_1)) (1 − cos Φ(t)) ],
  (iii) ∂Φ(t)/∂ω_1 = (1/ω_1) sin(Φ(t)).

(b)
  (i) ∂Φ_peak/∂β_1 = −[ 1/ω_1 + ((ω_1² − 1)/(2ω_1)) (1 − cos Φ_peak) ],
  (ii) ∂Φ_peak/∂ω_1 = −(1/ω_1) sin(Φ_peak).

The proof is deferred to the appendix.
Note that the derivatives in Proposition 5 are formulated as functions of Φ(t) and Φ_peak, respectively, and not only as functions of time, which is useful in applications. More specifically, it is relevant for real practice to note that −∂Φ(t)/∂α_1, −∂Φ_peak/∂β_1 and −∂Φ_peak/∂ω_1 are also non-negative functions.

ODE representation of FMM_m signals

Dynamical models describing the state of a system are frequently formulated in terms of ODEs, very often in Neuroscience. The derivation of the ODE representation of the FMM signal is interesting to compare with alternative models and to show other aspects of the dynamics that the signal describes. The problem is known as the inverse problem for ODEs: given a function signal, find an ODE f(x, ẋ, ẍ, ..., t) = 0, with ẋ = ∂x(t)/∂t, that admits that signal as a solution. The results in this section are inspired by the work of Wigren (2015), where the conditions under which a periodic signal can be represented by an ODE of order k are derived. Specifically, it is shown that the minimal order depends on the minimal dimension in which the stable orbit of the system does not intersect itself. For an FMM signal, this dimension is two, as Theorem 2 shows. Moreover, using a change of variable, we derive a first order ODE associated to the DP that describes the phase dynamics (phase model).

Theorem 2.
Let µ(t) be an FMM_1 signal with ω_1 > 0 and z(t) = tan(Φ(t)/2); then:

(a) µ(t) is the solution to the following equation: ẍ(t) = −x(t) φ̇_1(t)² + ẋ(t) φ̈_1(t)/φ̇_1(t).

(b) z(t) is the solution to the following equation: ẋ(t) = ω_1/2 + x(t)²/(2ω_1).

The proof is deferred to the appendix. Furthermore, a system of ODEs with an FMM_m signal as a solution can be derived, as is done, for instance, in Wigren and Söderström (2005), using Theorem 2 and the additive structure of FMM_m. However, the minimum order of the ODE representing an FMM_m model cannot be predicted in advance, as it depends on the parameter configuration.
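Theorem 2 (b) can be verified numerically: for z(t) = tan(Φ(t)/2) = ω_1 tan((t − α_1)/2), a finite-difference derivative matches the quadratic right-hand side of the phase ODE. A minimal sketch (pure Python; names are ours):

```python
import math

def z(t, alpha1, omega1):
    """z(t) = tan(Phi(t)/2) = omega_1 * tan((t - alpha_1)/2) for an FMM_1 signal."""
    return omega1 * math.tan((t - alpha1) / 2.0)

def rhs(x, omega1):
    """Right-hand side of the phase ODE in Theorem 2 (b): x' = omega_1/2 + x^2/(2*omega_1)."""
    return omega1 / 2.0 + x * x / (2.0 * omega1)

# Finite-difference derivative of z matches the ODE's right-hand side
alpha1, omega1, t, h = 1.0, 0.4, 2.0, 1e-6
fd = (z(t + h, alpha1, omega1) - z(t - h, alpha1, omega1)) / (2 * h)
```

The quadratic form ẋ = ω_1/2 + x²/(2ω_1) is what links the phase model to quadratic integrate-and-fire dynamics discussed in the next section.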
4. Applications in Neuroscience
Neuroscience can be defined as a multidisciplinary branch of biology that combines physiology, anatomy, molecular biology, mathematical modeling, and psychology to understand the nervous system. We deal here with neuron cells. Much of the mathematical treatment of the nervous system has its roots in the theory of ODEs. It has been promoted for many years in the work of Winfree (2001), Holter et al. (2000), Kopell and Ermentrout (1986), Ermentrout (1981), and Izhikevich (2007), to name just a few. For a survey, we refer the reader to the book by Ermentrout and Terman (2010). Models that describe nervous signals can be classified as empirical or mechanistic; the former attempt to describe spiking output and are based on direct observation, while the latter attempt to describe physiological features and are based on an understanding of the behavior of a system's components. The FMM_m model is of the former class and the Hodgkin and Huxley model of the latter. The empirical models are particularly useful to analyse in-vivo data.

The estimation of the phase and other quantities associated with the system depends on the specific approach. In particular, a critical curve to be estimated from experimental data is the PRC, also known as the phase resetting curve or phase sensitivity curve. There is no consensus on the phase definition, and nor is there on the estimation of the PRC from experimental data. The subject has received much attention in the literature, as PRCs are used for multiple purposes; for instance, see Schultheiss et al. (2011). More specifically, Oprisan (2017), Shiju and Sriram (2019), and Rosenblum et al. (2018) propose using the AS to estimate PRCs.

The FMM approach solves the construction of PRCs satisfactorily. Let us assume that a perturbation of the system can be represented by a change in one of the parameters; hence, the PRC can be obtained by estimating the derivative of the DP with respect to each of the parameters. Alternatively, it could also be interesting to measure changes in the DPT; the PRCs could then be derived by calculating the relative changes in the DPT. The two types of PRCs documented in the literature are Type I (positive, phase advances) and Type II (positive and negative, phase advances and delays), see Ko and Ermentrout (2009). Proposition 5 shows how the two types of functional forms, Type I and Type II, arise depending on which parameter is changing.

Furthermore, if the derivative with respect to t is considered to calculate the PRC, the ODE derived in Theorem 2 (b) shows that the model associated to the DP is closely related to the theta model, also known as the Ermentrout-Kopell canonical model (Ermentrout (1996)). Specifically, the FMM model is equivalent to this latter model when ω_1 = 0.5, assuming ω_1² = I, where I is the stimulus intensity. Therefore, when the DP is considered, only phase advances are produced by a perturbation. Accordingly, the classification into Type I or Type II models depends on whether the DP is adopted or not; which, in turn, depends on the user, the number of components, and the variability explained by the dominant component. Whichever definition of the PRC is chosen, the FMM approach simplifies the estimation process because the PRC is formulated parametrically.

Regarding the AP curve, Hodgkin and Huxley and other ODE models have been extensively used to fit APs from in vitro data. However, the models are not useful for experimental or in vivo data, as is explained and illustrated, for instance, in Naundorf et al. (2006). The FMM approach achieves a quasi-perfect fit for different AP patterns, as can be seen with the numerical analysis below. Other interesting applications are mentioned in the discussion section.
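As an illustration of parameter-based PRCs (a pure-Python sketch with illustrative names, not the paper's code), the response of the dominant phase to a small perturbation of ω_1 can be compared with the closed form (1/ω_1) sin Φ(t) from Proposition 5; its change of sign along the cycle is what allows Type II patterns:

```python
import math

def dominant_phase(t, alpha1, omega1):
    """Dominant phase Phi(t) = 2*arctan(omega_1 * tan((t - alpha_1)/2))."""
    return 2.0 * math.atan(omega1 * math.tan((t - alpha1) / 2.0))

def prc_omega(t, alpha1, omega1):
    """Closed-form response of Phi(t) to a change in omega_1:
    dPhi/domega_1 = (1/omega_1) * sin(Phi(t)); it changes sign along the cycle."""
    return math.sin(dominant_phase(t, alpha1, omega1)) / omega1

# Finite-difference PRC for a small perturbation of omega_1 matches the closed form
t, alpha1, omega1, h = 2.0, 1.0, 0.4, 1e-6
fd = (dominant_phase(t, alpha1, omega1 + h) - dominant_phase(t, alpha1, omega1 - h)) / (2 * h)
```

Evaluating prc_omega before and after the phase origin shows both advances and delays, whereas the response to α_1 keeps a constant sign.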
Table 1. R²'s mean (standard deviation) across dendrite type and species.

Dendrite Type | Species | R² (FMM_m) | R² (Spline) | R² (FD) | R² (FMM)
Inhibitory    | Human   |            |             |         |
              | Mouse   |            |             |         |
Excitatory    | Human   |            |             |         |
              | Mouse   |            |             |         |
The ACTD includes morphological and electrophysiological data collected from individual human or mouse recordings of high temporal resolution time series of membrane potential. The APs from the first 500 recorded neurons, using the short square stimulus and the lowest stimulus amplitude generating a spike, have been analysed. The time needed by the neuron to spike following the application of the stimulus (d) is used to delimit the segment containing the AP, which is defined as [t_S − d, t_S + 3d], with t_S denoting the time of the spike. This uneven cut is done to capture the asymmetry of the AP, as the depolarization happens much faster than the rest of its stages. The number of observations depends on the experiment, ranging from 500 to 4500. Neurons from two species, human and mouse, are analysed. According to the dendrite type, neurons are classified as inhibitory or excitatory. 18% are human neurons (22% of them inhibitory) and 82% mouse neurons (49% of them inhibitory).

The FMM_m, Spline, FD, and FMM models have been fitted to the signals. The Spline and FD models fitted are comparable to the FMM_m in complexity. Therefore, as the FMM_m model has 13 parameters, a 13 df (degrees of freedom) Spline and an FD with six harmonics have been considered. Figure 4 shows the APs for inhibitory and excitatory neurons from humans and mice with different patterns.

Consider the measure R² defined in (5) as a measure of the goodness of fit of a model. Among the four models fitted to the data, the highest R² is always that of FMM_m. In most cases, R² is also higher for FMM than for the Spline or FD models. Table 1 gives R² means and standard deviations across types and species. As Figure 4 and the numbers in Table 1 illustrate, most signals are quite well represented with an FMM model, in particular inhibitory neurons.
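For reference, the goodness-of-fit measure can be sketched as follows. The snippet below is a hypothetical stand-in, not the paper's pipeline: a synthetic Möbius-type wave replaces an ACTD recording, the FD with six harmonics (13 df, as in the text) is fitted by ordinary least squares, and R² is computed as 1 − SSE/SST.

```python
import numpy as np

# Hypothetical stand-in for the comparison protocol: a synthetic
# Mobius-type wave replaces an ACTD recording, and the Fourier
# Decomposition (FD) with six harmonics (13 df) is fitted by
# ordinary least squares. R2 is the usual 1 - SSE/SST measure.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
# Sharp, asymmetric oscillation plus observational noise
signal = (np.cos(2 * np.arctan(0.2 * np.tan((t - 1.0) / 2)))
          + rng.normal(0, 0.05, t.size))

# Design matrix for the FD: intercept plus six cosine/sine pairs
X = np.column_stack([np.ones_like(t)] +
                    [f(k * t) for k in range(1, 7) for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
fitted = X @ coef

r2 = 1 - np.sum((signal - fitted) ** 2) / np.sum((signal - signal.mean()) ** 2)
print(X.shape[1], 0 < r2 < 1)  # 13 columns = 13 df
```

The design choice mirrors the text's complexity matching: an intercept plus six harmonic pairs gives exactly 13 degrees of freedom, comparable to the 13-parameter multicomponent model.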
The latter is an interesting fact, as the FMM is a much simpler model with 5 df, as against the 13 df of the other three models.

The ability of the parameters to characterize different transgenic lines, or their potential in supervised classification, is beyond the scope of this paper and will be part of our future research. An insight into the potential of the FMM parameters to discriminate cell types is given in Figure 5, which shows how the DPT distribution differs across species and dendrite types.

Figure 4. APs extracted from different experiments from the ACTD, along with the signals fitted using a Spline with 13 df (green), an FD with six harmonics (yellow), FMM (blue), and FMM_m (red). Panels (a)–(h) show excitatory and inhibitory cells from mouse and human.

Figure 5. Distribution of the Dominant Peak Time across dendrite type and species.
5. Discussion
In this paper, the FMM approach is presented as a multi-purpose approach with solid mathematical and statistical support. It provides a decomposition of a periodic signal into several components, with a parametric formulation that facilitates the interpretability and the derivation of essential elements. Moreover, the ODE representation for the FMM signal captures the dynamics, and, on the statistical side, the estimation algorithm and other inference tools allow the analysis of observed signals in the presence of noise.

From the applied side, AP curves from neuron synapses have been analysed using the FMM approach, and questions related to the PRC estimation have been addressed. However, many questions still remain open in Neuronal Dynamics. Along with the performance of the proposed PRC estimators in real practice, we can cite the potential of the model parameters to define synchronization measures (Aydore et al. (2013)). Moreover, we have focused here on the AP of neurons; however, nerves, muscles, and other APs from extracellular recordings are also of great interest. Specifically, three basic waveforms can be defined, monophasic, biphasic or triphasic, based on where the recording electrode is placed (Raghavan et al. (2019)). The FMM parameters can accurately discriminate between these patterns.

Actually, the algorithmic extraction and categorization of distinct APs is one of the most exciting problems in data analysis in neurophysiology. It is known as spike sorting, and has been generating much attention recently in the literature (Rey et al. (2015), Caro-Martín et al. (2018), Teeter et al. (2018), Souza et al. (2019), Rácz et al. (2020), to mention only a few). The FMM parameters could contribute efficiently to solving the problem of feature extraction in the classification process.

Furthermore, there are multiple questions related to other electrophysiological signals, such as the EEG signal and other brain signals, where the FMM approach could be useful. More specifically, phase-related quantities have been widely used in the analysis of cerebral disorders, as is illustrated in Sameni and Seraj (2017) and Atallah and Scanziani (2009), among others.

Finally, there are many other fields with a tradition in signal analysis where the FMM, as a model or decomposition approach, could be useful too, starting by providing a kind of bandwidth filtering.
Acknowledgments
The authors gratefully acknowledge the financial support received from the Spanish Ministerio de Ciencia e Innovación and the European Regional Development Fund; Ministerio de Economía y Competitividad grants [MTM2015-71217-R and PID2019-106363RB-I00 to CR and YL].
6. Appendix
Proof of Proposition 5
From (2), we have that $\tan\left(\frac{\Phi(t)}{2}\right) = \omega \tan\left(\frac{t-\alpha}{2}\right)$. Differentiating with respect to $t$, we have that
$$\frac{\partial \Phi(t)}{\partial t} = \frac{\omega\left(1+\tan^2\left(\frac{t-\alpha}{2}\right)\right)}{1+\omega^2\tan^2\left(\frac{t-\alpha}{2}\right)} = \frac{1}{\omega}\,\frac{\omega^2+\omega^2\tan^2\left(\frac{t-\alpha}{2}\right)}{1+\omega^2\tan^2\left(\frac{t-\alpha}{2}\right)}.$$

Now, from (6) and the last statement it follows that
$$\frac{\partial \Phi(t)}{\partial t} = \frac{1}{\omega}\,\frac{\omega^2+\tan^2\left(\frac{\Phi(t)}{2}\right)}{1+\tan^2\left(\frac{\Phi(t)}{2}\right)} = \omega\cos^2\left(\frac{\Phi(t)}{2}\right) + \frac{1}{\omega}\sin^2\left(\frac{\Phi(t)}{2}\right). \qquad (7)$$

Finally, 1.(a) is the result of applying the trigonometric identities $\cos^2(\theta) = \frac{1+\cos(2\theta)}{2}$ and $\sin^2(\theta) = \frac{1-\cos(2\theta)}{2}$ to the right-hand side of (7), as follows:
$$\frac{\partial \Phi(t)}{\partial t} = \frac{\omega}{2}\left(1+\cos(\Phi(t))\right) + \frac{1}{2\omega}\left(1-\cos(\Phi(t))\right) = \omega + \frac{1-\omega^2}{2\omega}\left(1-\cos\Phi(t)\right).$$

Proposition 5, 1.(b) is proved in a similar way, provided that $\frac{\partial \Phi(t)}{\partial \alpha} = -\frac{\partial \Phi(t)}{\partial t}$.

Proposition 5, 1.(c) is also proved using similar arguments as above and the equality $\sin(2\theta) = 2\sin(\theta)\cos(\theta)$, as follows:
$$\frac{\partial \Phi(t)}{\partial \omega} = \frac{2\tan\left(\frac{t-\alpha}{2}\right)}{1+\omega^2\tan^2\left(\frac{t-\alpha}{2}\right)} = \frac{2}{\omega}\,\frac{\tan\left(\frac{\Phi(t)}{2}\right)}{1+\tan^2\left(\frac{\Phi(t)}{2}\right)} = \frac{2}{\omega}\sin\left(\frac{\Phi(t)}{2}\right)\cos\left(\frac{\Phi(t)}{2}\right) = \frac{1}{\omega}\sin(\Phi(t)).$$

In addition, Proposition 5, 2.(a) and 2.(b) follow in a similar way, and the proofs are left to the reader.
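The identities above lend themselves to a quick numerical check. The snippet below (illustrative only; parameter values are arbitrary, and the grid is restricted to one branch of the tangent) verifies 1.(a), 1.(c), and the Riccati form used in Theorem 2 (b) by central finite differences.

```python
import numpy as np

# Finite-difference verification of the closed forms derived above.
# Arbitrary parameter values; t stays within one branch of tan.
omega, alpha = 0.4, 0.7

def Phi(t, w=omega):
    # tan(Phi/2) = w * tan((t - alpha)/2)
    return 2 * np.arctan(w * np.tan((t - alpha) / 2))

t = np.linspace(alpha - 1.2, alpha + 1.2, 101)
h = 1e-6

# 1.(a): dPhi/dt = omega + (1 - omega^2)/(2 omega) * (1 - cos Phi)
d_t = (Phi(t + h) - Phi(t - h)) / (2 * h)
rhs_t = omega + (1 - omega**2) / (2 * omega) * (1 - np.cos(Phi(t)))

# 1.(c): dPhi/domega = (1/omega) * sin Phi
d_w = (Phi(t, omega + h) - Phi(t, omega - h)) / (2 * h)
rhs_w = np.sin(Phi(t)) / omega

# Theorem 2 (b) Riccati form: x = tan(Phi/2) satisfies
# x' = omega/2 + x^2/(2 omega)
x = lambda s: np.tan(Phi(s) / 2)
d_x = (x(t + h) - x(t - h)) / (2 * h)
rhs_x = omega / 2 + x(t) ** 2 / (2 * omega)

print(np.max(np.abs(d_t - rhs_t)) < 1e-6,
      np.max(np.abs(d_w - rhs_w)) < 1e-6,
      np.max(np.abs(d_x - rhs_x)) < 1e-6)
```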
Proof of Theorem 2

On the one hand, we have that
$$\dot{\phi}(t) = \frac{\omega\left(1+\tan^2\left(\frac{t-\alpha}{2}\right)\right)}{1+\omega^2\tan^2\left(\frac{t-\alpha}{2}\right)} = \frac{\omega}{\cos^2\left(\frac{t-\alpha}{2}\right)+\omega^2\sin^2\left(\frac{t-\alpha}{2}\right)},$$
which implies that $\dot{\phi}(t) \geq \omega > 0$, since $\omega \in (0,1]$. On the other hand, as $x(t) = A\cos(\phi(t))$, then $\dot{x}(t) = -A\sin(\phi(t))\dot{\phi}(t)$, which implies
$$A\sin(\phi(t)) = -\frac{\dot{x}(t)}{\dot{\phi}(t)}, \qquad (8)$$
where $\dot{\phi}(t) \geq \omega > 0$. Moreover,
$$\ddot{x}(t) = -A\cos(\phi(t))\dot{\phi}^2(t) - A\sin(\phi(t))\ddot{\phi}(t) = -x(t)\dot{\phi}^2(t) + \dot{x}(t)\frac{\ddot{\phi}(t)}{\dot{\phi}(t)},$$
and Theorem 2 (a) follows.

Finally, let $x(t) = \tan\left(\frac{\Phi(t)}{2}\right)$; now, using (6), it is easy to show that
$$\dot{x}(t) = \frac{\omega}{2}\left(1+\tan^2\left(\frac{t-\alpha}{2}\right)\right) = \frac{\omega}{2} + \frac{1}{2\omega}x^2(t),$$
and Theorem 2 (b) follows.

References
Ashwin, P., Coombes, S., and Nicks, R. (2016). Mathematical frameworks for oscillatory network dynamics in neuroscience. The Journal of Mathematical Neuroscience, 6(1):2.
Atallah, B. and Scanziani, M. (2009). Instantaneous modulation of gamma oscillation frequency by balancing excitation with inhibition. Neuron, 62(4):566–577.
Aydore, S., Pantazis, D., and Leahy, R. (2013). A note on the phase locking value and its properties. Neuroimage, 74:231–244.
Boashash, B. (2016). Time-Frequency Signal Analysis and Processing: A Comprehensive Reference. Elsevier Science.
Caranica, C., Cheong, J., Qiu, X., Krach, E., Deng, Z., Mao, L., Schüttler, H., and Arnold, J. (2019). Focus: Clocks and cycles: What is phase in cellular clocks? The Yale Journal of Biology and Medicine, 92(2):169.
Caro-Martín, C., Delgado-García, J., Gruart, A., and Sánchez-Campusano, R. (2018). Spike sorting based on shape, phase, and distribution features, and k-tops clustering with validity and error indices. Scientific Reports, 8(1):1–28.
Chavez, M., Besserve, M., Adam, C., and Martinerie, J. (2006). Towards a proper estimation of phase synchronization from time series. Journal of Neuroscience Methods, 154(1-2):149–160.
Deng, Z., Arsenault, S., Mao, L., and Arnold, J. (2016). Measuring synchronization of stochastic oscillators in biology. In Journal of Physics: Conference Series, volume 750, page 1. IOP Publishing.
Ermentrout, G. (1981). n:m phase-locking of weakly coupled oscillators. Journal of Mathematical Biology, 12(3):327–342.
Ermentrout, G. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8(5):979–1001.
Ermentrout, G. and Terman, D. (2010). Mathematical Foundations of Neuroscience, volume 35. Springer Science & Business Media.
Freitas, L., Torres, L., and Aguirre, L. (2018). Phase definition to assess synchronization quality of nonlinear oscillators. Physical Review E, 97(5):052202.
Gabor, D. (1946). Theory of communication. Part 1: The analysis of information. Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, 93(26):429–441.
Gerstner, W., Kistler, W., Naud, R., and Paninski, L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press.
Hodgkin, A. and Huxley, A. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4):500–544.
Holter, N., Mitra, M., Maritan, A., Cieplak, M., Banavar, J., and Fedoroff, N. (2000). Fundamental patterns underlying gene expression profiles: Simplicity from complexity. Proceedings of the National Academy of Sciences of the United States of America, 97(15):8409–8414.
Izhikevich, E. (2007). Dynamical Systems in Neuroscience. MIT Press.
Ko, T. and Ermentrout, G. (2009). Phase-response curves of coupled oscillators. Physical Review E, 79(1):016211.
Kopell, N. and Ermentrout, G. (1986). Symmetry and phaselocking in chains of weakly coupled oscillators. Communications on Pure and Applied Mathematics, 39(5):623–660.
Kowalski, M., Meynard, A., and Wu, H. (2018). Convex optimization approach to signals with fast varying instantaneous frequency. Applied and Computational Harmonic Analysis, 44(1):89–122.
Krantz, S. (2012). Handbook of Complex Variables. Springer Science & Business Media.
Larriba, Y., Rueda, C., Fernández, M., and Peddada, S. (2019). Order restricted inference in chronobiology. Submitted.
Lin, C., Su, L., and Wu, H. (2018). Wave-shape function analysis. Journal of Fourier Analysis and Applications, 24(2):451–505.
Mensi, S., Naud, R., Pozzorini, C., Avermann, M., Petersen, C., and Gerstner, W. (2012). Parameter extraction and classification of three cortical neuron types reveals two distinct adaptation mechanisms. Journal of Neurophysiology, 107(6):1756–1775.
Naundorf, B., Wolf, F., and Volgushev, M. (2006). Unique features of action potential initiation in cortical neurons. Nature, 440(7087):1060–1063.
Oprisan, S. (2017). A consistent definition of phase resetting using Hilbert transform. International Scholarly Research Notices, 2017.
Osipov, G., Hu, B., Zhou, C., Ivanchenko, M., and Kurths, J. (2003). Three types of transitions to phase synchronization in coupled chaotic oscillators. Physical Review Letters, 91(2):024101.
Picinbono, B. (1997). On instantaneous amplitude and phase of signals. IEEE Transactions on Signal Processing, 45(3):552–560.
Pikovsky, A. and Rosenblum, M. (2015). Dynamics of globally coupled oscillators: Progress and perspectives. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(9):097616.
Rácz, M., Liber, C., Németh, E., Fiáth, R., Rokai, J., Harmati, I., Ulbert, I., and Márton, G. (2020). Spike detection and sorting with deep learning. Journal of Neural Engineering, 17(1):016038.
Raghavan, M., Fee, D., and Barkhaus, P. (2019). Generation and propagation of the action potential. In Handbook of Clinical Neurology, volume 160, pages 3–22. Elsevier.
Rey, H., Pedreira, C., and Quiroga, R. (2015). Past, present and future of spike sorting techniques. Brain Research Bulletin, 119:106–117.
Rosenblum, M. et al. (2018). Inferring the phase response curve from observation of a continuously perturbed oscillator. Scientific Reports, 8(1):1–10.
Rueda, C., Larriba, Y., and Lamela, A. (2020). The hidden wave in the ECG uncovered: a sound automated interpretation method. https://arxiv.org/, 1(1):1.
Rueda, C., Larriba, Y., and Peddada, S. (2019). Frequency modulated Möbius model accurately predicts rhythmic signals in biological and physical sciences. Scientific Reports, 9(1):1–10.
Sameni, R. and Seraj, E. (2017). A robust statistical framework for instantaneous electroencephalogram phase and frequency estimation and analysis. Physiological Measurement, 38(12):2141.
Sandoval, S. and De Leon, P. (2015). Theory of the Hilbert spectrum. arXiv.
Sandoval, S. and De Leon, P. (2018). The instantaneous spectrum: A general framework for time-frequency analysis. IEEE Transactions on Signal Processing, 66(21):5679–5693.
Schultheiss, N. W., Prinz, A., and Butera, R. (2011). Phase Response Curves in Neuroscience: Theory, Experiment, and Analysis. Springer Science & Business Media.
Shiju, S. and Sriram, K. (2019). Multi-scale modeling of the circadian modulation of learning and memory. PLoS ONE, 14(7).
Souza, B. C., Lopes-dos Santos, V., Bacelo, J., and Tort, A. B. (2019). Spike sorting with Gaussian mixture models. Scientific Reports, 9(1):1–14.
Teeter, C., Iyer, R., Menon, V., Gouwens, N., Feng, D., Berg, J., Szafer, A., Cain, N., Zeng, H., Hawrylycz, M., et al. (2018). Generalized leaky integrate-and-fire models classify multiple neuron types. Nature Communications, 9(1):1–15.
Trainito, C., von Nicolai, C., Miller, E., and Siegel, M. (2019). Extracellular spike waveform dissociates four functionally distinct cell classes in primate cortex. Current Biology, 29(18):2973–2982.
Wei, D. and Bovik, A. (1998). On the instantaneous frequencies of multicomponent AM-FM signals. IEEE Signal Processing Letters, 5(4):84–86.
Wigren, T. (2015). Model order and identifiability of non-linear biological systems in stable oscillation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 12(6):1479–1484.
Wigren, T. and Söderström, T. (2005). A second order ODE is sufficient for modelling of many periodic signals. International Journal of Control, 78(13):982–996.
Winfree, A. (2001).