Bayesian approach to uncertainty quantification for cerebral autoregulation index
Kevin P. O’Keeffe
Senseable City Lab, Massachusetts Institute of Technology, USA
Adam Mahdi
Department of Engineering Science, University of Oxford, UK
Abstract
Cerebral autoregulation refers to the brain's ability to maintain cerebral blood flow at an approximately constant level, despite changes in arterial blood pressure. The performance of this mechanism is often assessed using a ten-point index called the ARI (autoregulation index), where 0 denotes the absence of autoregulation and 9 denotes the strongest autoregulation. Current methods to calculate the ARI do not typically provide error estimates. Here, we show how this can be done using a Bayesian approach. We use Markov chain Monte Carlo methods to produce a probability distribution for the ARI, which gives a natural way to estimate error.
Introduction
The brain is an extraordinary system of supply and demand. It requires a rich supply of oxygen, accounting for approximately one-fifth of the total bodily intake. This demand for oxygen is supplied by a virtually constant cerebral blood flow. Interruptions to this blood flow can have dire consequences, including haemorrhage, embolism, and aneurysm. In order to function properly, the brain must therefore have the ability to maintain an approximately constant cerebral blood flow velocity (CBFV), in spite of changes in arterial blood pressure (ABP). This ability is known as cerebral autoregulation (CA) [1]. The first experimental demonstration of CA was performed by Lassen [12], who derived the triphasic autoregulation curve from CBFV measurements in different human studies. In this work, we use Markov chain Monte Carlo methods to produce a probability distribution for a single ARI, whose width provides the desired estimate of the error in the ARI.
Data collection.
The ABP and CBFV data used in this study have been previously used (Hebrew Rehabilitation Center for Aged, Boston, MA) [14]. ABP was measured noninvasively using a photoplethysmographic Finapres monitor (Ohmeda Monitoring Systems, Englewood, CO). In order to eliminate hydrostatic pressure effects, the subject's nondominant hand was supported by a sling at the level of the right atrium. The individuals were asked to breathe at a rate of 15 breaths per minute to minimise the effects of respiration. Doppler ultrasonography was used to measure the changes in CBFV within the middle cerebral artery (MCA). The 2 MHz probe of a portable Doppler system (MultiDop X4, DWL-Transcranial Doppler Systems Inc., Sterling, VA) was strapped over the temporal bone and locked in position with a Mueller-Moll probe fixation device to image the MCA. Flow velocity was recorded at a depth of approximately 50–65 mm, digitised, and stored for analysis. For a complete description of the data collection procedure see [14].
Data preprocessing.
The pulsatile ABP and CBFV were filtered using a zero-phase 8th-order Butterworth low-pass filter with a cutoff frequency of 20 Hz (see [22]). Subsequently, the beginning and end of each cardiac cycle were marked by the onset of the systole using the blood pressure signal. The onsets were detected using a windowed and weighted slope sum function and adaptive thresholding [25]. The mean ABP and CBFV were calculated for each detected cardiac cycle. A first-order polynomial was used to interpolate the resulting beat-to-beat time series, which was followed by downsampling at 10 Hz to produce signals with a uniform time base. Examples of preprocessed data are shown in Figure 1.

Figure 1: Time series of a patient's arterial blood pressure P (mmHg) and cerebral blood flow velocity V (cm/s). The data have been collected and processed as described in Section 2.1.

The Aaslid–Tiecks method. Here we describe the original Aaslid–Tiecks method [1] to measure the ARI. We denote by P[t] and V[t] the two time series of ABP and CBFV, respectively, of length N and indexed by t. Let P_m and V_m denote the mean values of P[t] and V[t] over the entire interval of interest. Initially, the time-varying ABP signal P[t] is normalised as follows:

    dP[t] = (P[t] − P_m) / (P_m − P_cr),   (1)

where P_cr = 12 mmHg is the critical closing pressure [24]. The following system of difference equations is used to compute the intermediate quantities x1, x2:

    x1[t] = x1[t−1] + (dP[t−1] − x2[t−1]) / (f T),
    x2[t] = x2[t−1] + (x1[t−1] − 2 D x2[t−1]) / (f T),   (2)

where f, D, and T are the sampling frequency, damping factor, and time constant parameters, respectively. The modelled CBFV, denoted by V̂[t], is computed as

    V̂[t] = V_m (1 + dP[t] − K x2[t]),   (3)

where K is a parameter representing autoregulatory gain. Note that in order to start the process at steady state the initial conditions can be selected as

    x1(0) = 2 D dP[0],  x2(0) = dP[0].   (4)

CA assessment.
Following [24], in Table 1 we list the ten combinations of the parameters (T, D, K) used to generate ten models corresponding to various grades of autoregulation, ranging from 0 (absence of autoregulation) to 9 (strongest autoregulation).
Table 1: Parameter combinations (T, D, K) defining the ten reference autoregulation grades, following [24].

ARI   0     1     2     3     4     5     6     7     8     9
T     2.00  2.00  2.00  2.00  2.00  1.90  1.60  1.20  0.87  0.65
D     0.00  1.60  1.50  1.15  0.90  0.75  0.65  0.55  0.52  0.50
K     0.00  0.20  0.40  0.60  0.80  0.90  0.94  0.96  0.97  0.98

Let V̂_j(t), where j = 0, …,
9, denote the response of the model for the j-th combination of the parameters (T, D, K). The ARI is determined by selecting the model producing the smallest root-mean-square (RMS) error:

    ARI_std = argmin_{j ∈ {0,…,9}} || (V̂_j[t] − V[t]) / V_m ||.   (5)

The results are interpolated by cubic splines to include fractional values of the ARI. We use the subscript 'std' in the above definition of the ARI to distinguish it from the alternate definition we propose in the subsequent section.

Bayesian inference. We briefly describe the theory behind Bayesian inference before applying it to our model. Assume we have a data set X and a model for this data, Y(θ), which depends on some parameters θ = {θ1, θ2, …}. In traditional parameter fitting we would define a cost function C(θ) = Σ_j ||Y_j(θ) − X_j||, with some suitable norm ||·|| (the L2 norm being the most popular). The best-fit parameters θ* are then those which minimise this cost function: θ* = argmin_θ C(θ).

How certain are we of θ*? Bayesian inference answers this question by providing a probability distribution P(θ|X) for the parameters θ given the data X, which quantifies this certainty. It can be computed using Bayes' theorem,

    P(θ|X) = P(X|θ) P(θ) / P(X).   (6)

The term P(θ) is the prior distribution of the parameters θ. This quantifies our belief about what the parameters θ could be before observing the data X. If we have no prior beliefs, a uniform distribution is often used, so that each parameter value is equally likely. The term P(X|θ) is the 'likelihood', the probability of measuring the data X given the parameters θ. It is the asserted relationship between the model and the data. This is chosen by the modeller, and is dictated by the data in question.
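As a toy numerical illustration of Bayes' theorem (6) (our example, not from the paper): for a coin with unknown head-probability θ, the posterior can be evaluated on a grid, with the evidence P(X) acting as the normalising constant. The data (7 heads in 10 flips) are made up for illustration.

```python
import numpy as np

# Observed data X: 7 heads in 10 flips (made-up numbers for illustration).
theta = np.linspace(0.0, 1.0, 1001)        # grid over the parameter theta
dtheta = theta[1] - theta[0]
prior = np.ones_like(theta)                # uniform prior P(theta)
likelihood = theta**7 * (1.0 - theta)**3   # P(X|theta), up to a constant factor
evidence = np.sum(likelihood * prior) * dtheta  # P(X) ≈ ∫ P(X|theta) P(theta) dtheta
posterior = likelihood * prior / evidence  # Bayes' theorem, equation (6)

assert np.isclose(np.sum(posterior) * dtheta, 1.0)   # posterior is normalised
assert np.isclose(theta[np.argmax(posterior)], 0.7)  # peak at the ML estimate 7/10
```

The binomial coefficient is omitted from the likelihood because any constant factor cancels when dividing by the evidence.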
For parameter inference problems the likelihood is typically taken to be Gaussian. The last term, P(X), is often called the 'evidence', and can be thought of as a normalisation constant. It can be expressed in terms of the prior and likelihood: P(X) = ∫ P(X|θ) P(θ) dθ. If we could compute this integral then we would have our desired distribution P(θ|X) via (6). However, only in the simplest cases is this doable analytically, and so numerical approximations must be used.

Markov chain Monte Carlo. There is a way to sample from the desired distribution P(θ|X) without ever computing the evidence P(X). This is known as the Markov chain Monte Carlo (MCMC) method, which we use in this work. There are many resources [2, 11, 23] which derive the theory behind MCMC, but for convenience we give a brief description here, before returning to the ARI problem.

The basic idea of MCMC is to construct a Markov chain whose limiting distribution is P(θ|X), our sought-after distribution. A Markov chain is uniquely defined by a collection of states, y, and transition probabilities between these states, P(y′|y), which do not depend on time. (Note that in the context of Bayesian parameter inference, these states correspond to points in the parameter space of θ.) At each time step we move from a state y to a new state y′ with probability P(y′|y). As time evolves, the Markov chain is described by P(y, t | y0), the probability of being in state y at time t having started in the state y0. For certain transition probabilities P(y′|y), there exists a stationary distribution

    π(y) = lim_{t→∞} P(y, t | y0).   (7)

A sufficient condition for the existence of this distribution is

    P(y′|y) π(y) = P(y|y′) π(y′).   (8)

The above condition is known as 'detailed balance', and can be thought of as a no-net-flux condition.
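As a numerical sanity check of the detailed balance condition (8) (our toy example, not from the paper), the following sketch verifies it for a two-state Markov chain and confirms that repeated application of the transition matrix converges to the stationary distribution. The transition probabilities are arbitrary illustrative values.

```python
import numpy as np

# Two-state Markov chain: P[i, j] is the probability of moving from state i to j.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution pi: the left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()                                   # normalise; here pi = [2/3, 1/3]

# Detailed balance, equation (8): pi[i] * P[i, j] == pi[j] * P[j, i].
assert np.isclose(pi[0] * P[0, 1], pi[1] * P[1, 0])

# The chain forgets its starting state and converges to pi, as in equation (7).
dist = np.array([1.0, 0.0])
for _ in range(200):
    dist = dist @ P
assert np.allclose(dist, pi, atol=1e-6)
```

Any two-state chain satisfies detailed balance with respect to its stationary distribution; in higher dimensions the condition is a genuine restriction, which is what the Metropolis–Hastings construction below exploits.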
Thus, if detailed balance (8) can be satisfied, then the Markov chain will have the stationary distribution P(θ|X), our goal. Our task is then to choose transition probabilities P(y′|y) that satisfy (8). But how do we choose these?

Metropolis–Hastings algorithm. Many different choices can be made, each of which corresponds to a different MCMC method. Common choices are the Metropolis–Hastings and Gibbs sampling algorithms. In this paper we use the Metropolis–Hastings algorithm, in which the transition probabilities are

    P(y′|y) = g(y′|y) A(y′|y).   (9)

The density g(y′|y) is the proposal distribution, and is typically taken to be a Gaussian. It proposes a transition to a new state y′. This proposal is then accepted or rejected with probability A(y′|y), a quantity accordingly called the acceptance distribution. Requiring detailed balance constrains A(y′|y) as follows:

    A(y′|y) / A(y|y′) = [π(y′) g(y|y′)] / [π(y) g(y′|y)],   (10)

which we obtained by plugging (9) into the detailed balance condition (8). Thus our problem is reduced to choosing A(y′|y) such that (10) is obeyed. The algorithm is defined by the choice

    A(y′|y) = min( 1, [π(y′) g(y|y′)] / [π(y) g(y′|y)] ).   (11)

We now translate the above results into the notation of the inference problem: the states of the Markov chain correspond to parameter values, y → θ, while the stationary distribution is the posterior distribution, π(y) → P(θ|X). Then

    A(θ′|θ) = min( 1, [P(θ′|X) g(θ|θ′)] / [P(θ|X) g(θ′|θ)] ).
(12)

Using Bayes' theorem (6), we can replace the ratio of posteriors P(θ′|X)/P(θ|X) with a ratio of likelihoods and priors — the evidence P(X) cancels — to obtain the final result

    A(θ′|θ) = min( 1, [P(X|θ′) P(θ′) g(θ|θ′)] / [P(X|θ) P(θ) g(θ′|θ)] ).   (13)

Although the theory behind MCMC is somewhat involved, using it as a tool is straightforward. The user simply specifies the likelihood P(X|θ) and a prior P(θ) as input, and then uses a software package to run the MCMC algorithm, which returns the desired posterior P(θ|X). There are many packages available for this purpose; we used Python's PyMC. We recommend the e-book "Bayesian Methods for Hackers" [6] for a hands-on introduction to PyMC.

An inverse-method ARI. It is difficult to apply Bayesian methods directly to the ARI as formulated above in (5). We therefore propose an alternate definition of the ARI which does not have this drawback, and which gives similar results to the original definition (5). Since this new ARI is calculated using an inverse method, we denote it ARI_inv.

The first step in calculating ARI_inv is to fit V̂[t] to the known time series V[t]. Recall that V̂[t] is determined by the pressure time series P[t] and the parameters T, D, K:

    V̂[t] = F(P[t], T, D, K),   (14)

where F(·) represents equation (3) with equations (1) and (2) inserted. We find the best-fit parameters (T*, D*, K*) by minimising the following cost function:

    C(T, D, K) = || V[t] − V̂[t] ||,   (15)

where ||·|| denotes the L2 norm. We then consider the quantity V*[t], which is the response of V̂[t] to a step-function pressure drop, H(t), at the best-fit parameters:

    V*[t] := F( P(t) = H(t), T*, D*, K* ).   (16)

ARI_inv is then found by comparing the asymptotic value of V*[t] to that of the reference flow rates V̂_j:

    ARI_inv = argmin_{j ∈ {0,…,9}} || (V̂_j[t→∞] − V*[t→∞]) / V_m ||.   (17)

As before, the above expression can be extended to fractional values by appropriate scaling. The rationale behind defining ARI_inv this way is illustrated in Figure 2, where we show the response of the reference flow rates V̂_j to a step-function drop in pressure. Recall that perfect autoregulation means the blood flow rate V̂ remains constant in spite of changes in blood pressure. In the context of a step-function drop in pressure, then, V̂ should briefly change as the pressure drops, before returning to its original value. As can be seen, the reference flow rate V̂_9 has this behaviour, correctly identifying that an ARI of 9 denotes perfect autoregulation. Similarly, the remaining reference velocities V̂_{j<9}, which by definition have lower ARIs, asymptote to gradually diminished values. Thus, there is a correspondence between the asymptotic value of V*[t] (in response to a step-function drop in pressure) and the ARI, which motivates the definition of ARI_inv given by (17).

Figure 2: Response of the reference cerebral blood flow rates V̂_j, j = 0, …, 9, to a step-function drop in pressure. As shown in panel (a), the drop in pressure has magnitude 50 mmHg and occurs at t = 5 s. Panel (b) shows the response of the reference blood flow rates V̂_j, computed using the parameter combinations (T, D, K) corresponding to each ARI. V̂_9 is temporarily diminished after the drop in pressure at t = 5 s, but eventually returns to its 'pre-drop' value, indicating perfect autoregulatory performance: ARI = 9. On the other hand, V̂_0 shows no recovery after the drop in pressure, which corresponds to a lack of autoregulation: ARI = 0. The values between 0 and 9 interpolate between these two extremes.
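The step responses shown in Figure 2 can be reproduced directly from equations (1)–(4). The sketch below is ours, not the authors' code: the function name and the baseline values (P_m = 100 mmHg, V_m = 60 cm/s, f = 10 Hz) are illustrative choices, with the (T, D, K) values for grades 0 and 9 taken from the reference table of [24].

```python
import numpy as np

def tiecks_response(P, T, D, K, f, P_m, V_m, P_cr=12.0):
    """Modelled CBFV (equation (3)) for a pressure series P, via equations (1)-(4)."""
    dP = (np.asarray(P, dtype=float) - P_m) / (P_m - P_cr)  # normalised pressure, eq. (1)
    x1, x2 = 2.0 * D * dP[0], dP[0]                         # steady-state start, eq. (4)
    V = np.empty(len(dP))
    V[0] = V_m * (1.0 + dP[0] - K * x2)
    for t in range(1, len(dP)):
        x1, x2 = (x1 + (dP[t - 1] - x2) / (f * T),          # eq. (2); both states are
                  x2 + (x1 - 2.0 * D * x2) / (f * T))       # updated from old values
        V[t] = V_m * (1.0 + dP[t] - K * x2)                 # eq. (3)
    return V

# Step drop of 50 mmHg at t = 5 s, as in Figure 2 (baselines are illustrative).
f, P_m, V_m = 10.0, 100.0, 60.0
P = np.concatenate([np.full(int(5 * f), P_m), np.full(int(30 * f), P_m - 50.0)])
V_ari0 = tiecks_response(P, T=2.00, D=0.00, K=0.00, f=f, P_m=P_m, V_m=V_m)  # no autoregulation
V_ari9 = tiecks_response(P, T=0.65, D=0.50, K=0.98, f=f, P_m=P_m, V_m=V_m)  # strongest grade
```

`V_ari9` dips after the step and then recovers to nearly V_m, while `V_ari0` remains proportionally diminished, mirroring the two extremes visible in Figure 2.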
Error estimation for the ARI. We now show how Bayesian inference can be used to estimate the error in ARI_inv. As described above, the definition of ARI_inv (17) requires finding the best-fit parameters (T*, D*, K*). Hence, a single parameter set (T, D, K) corresponds to a single ARI_inv, and a distribution of parameter sets would thus correspond to a distribution of ARI_inv values. We use Bayesian inference to find a distribution over the best-fit parameters, P(T, D, K | V) (conditioned on the data V), which quantifies our uncertainty in their values. In turn, the distribution P(T, D, K | V) can be used to find a distribution of ARI_inv values, whose width quantifies the error in ARI_inv.

We aim to find a distribution of the parameters of best fit given our data set, P(T, D, K | V), which using Bayes' theorem becomes

    P(T, D, K | V) = P(V | T, D, K) · P(T, D, K) / P(V).   (18)

We made the following choices for the likelihood and priors:

    P(V | T, D, K, σ) ∝ exp[ −||V − V̂||² / (2σ²) ],   (19)

    P(T, D, K, σ) = Uni(0, 2) · Uni(0, D_max) · Uni(0, 1) · Gamma(1, 1),   (20)

where the factors in (20) are the independent priors on T, D, K, and σ, respectively, and D_max is the truncated upper bound on D discussed below. We chose a Gaussian likelihood, which is equivalent to assuming the noise in our system is normally distributed with zero mean and standard deviation σ. There are two approaches to estimating σ. The first is for the modeller to specify an estimate from their knowledge of the physics of the problem. We were unsure how to do this for the blood flow velocity time series V, since it has been pre-processed, which smooths the noise.

We therefore opted for the second approach, which is to treat σ as an additional, unknown parameter to be estimated. For its prior we chose the gamma distribution, a two-parameter distribution whose probability density function is

    f(x) = Γ(α)⁻¹ β^α x^{α−1} e^{−βx},  x, α, β > 0,

and whose positive support matches that of σ. The choice (α, β) = (1, 1) was arbitrary. To check that it did not influence the results, we ran our simulations for a range of values of α and β and found no significant differences in the computed values.

We took the priors on the remaining parameters to be uniform distributions on intervals (a, b). The end points of each interval were chosen as the minimum and maximum values of the corresponding parameter in the reference ARI table, with the exception of the prior on D: we found that values of D near the upper end of its range produced unphysical behaviour in V̂, so we truncated its interval to (0, D_max).

Figure 3: Posterior distributions of the parameters as computed by MCMC. 5 × 10⁴ samples were taken, with a 50% burn-in and a thinning parameter of 10. Left: marginal distributions of the parameters T, D, K. Right: marginal distribution of the noise parameter σ.

Having specified the likelihood and priors, we computed the posterior P(T, D, K, σ | V). We took 5 × 10⁴ samples, with a 50% burn-in and a thinning parameter of 10 (these are numerical parameters required by PyMC). In Figure 3 we show the marginal distributions of each parameter.
Figure 4: Distribution of ARIs resulting from the posterior distribution P(T, D, K, σ | V). The ARI values computed using the inverse (equation (17)) and standard (equation (5)) methods are also shown for reference.

We next computed a distribution of ARIs by plugging each triplet (T, D, K) in the posterior distribution P(T, D, K, σ | V) into equation (17) for ARI_inv. The result is shown in Figure 4. A peaked distribution is evident, whose mean and width (defined as half the difference of the maximum and minimum values) finally give us the desired value and error of the ARI. For reference, we also show the ARI computed from the standard and inverse methods. As can be seen, the Bayesian result is consistent with the others:

    ARI_std ≈ 3,  ARI_inv ≈ 3,  ARI_bay ≈ 3,   (21)

with the Bayesian estimate additionally carrying an error bar given by the half-width of the distribution.

Discussion. The ARI is used to assess dynamic CA performance. Previous methods computed a single ARI for a given patient. In this work, we showed how Bayesian methods can be used to attach an error to a given ARI. We did this by using the Markov chain Monte Carlo method (in concert with an alternate definition of the ARI) to produce a probability distribution for a single ARI. We then used the width of this distribution to estimate the desired uncertainty in an ARI measurement.

The connection between impaired CA and certain disorders has been widely discussed in the literature. Certain brain disorders, such as stroke, subarachnoid haemorrhage, and head injury, appear to impair CA [7, 9, 4], which motivates the use of the ARI. Knowledge of the accuracy of an ARI reading, which our Bayesian approach supplies, strengthens this potential: it gives a clinician a sense of the reliability of a given measurement.

The current work has some limitations, most notably in the way the data were collected. For example, the cerebral blood flow velocity was approximated by measuring the flow rate in the middle cerebral artery (MCA). This approximation is only valid if the diameter of the MCA is constant, which was unverified.
Another limitation was the length of the time series V[t] and P[t], which was approximately two minutes. It is possible that this relatively short length is unrepresentative of the true behaviour of the ABP and CBFV, which would bias our results.

Another problem with the time series is that they often contain various artifacts, both physiological and non-physiological. The sensitivity of the ARI to these artifacts is relatively unknown. Recent works, however, have started to explore this problem [13, 19]. The authors in [19] examined the response of the ARI to four common types of non-physiological artifact (as identified by Li et al. in [13]), and found different qualitative effects. It would be interesting to see if our Bayesian approach could corroborate their findings — that is, if the error estimates of the ARI in the presence of a given artifact would reflect the effects they reported.

One purpose of our work was to demonstrate the utility of Bayesian statistics in biophysics problems. We chose perhaps the simplest example of how this could be done, namely using Bayesian parameter inference to estimate the error in ARIs. There are many other, more exotic Bayesian tools which could be used, which we hope will be explored in future work. One such contender is Bayesian sequential analysis, a method which dynamically decides when enough data have been collected in a sampling problem. This could be useful in determining, for example, how long the time series of a patient's ABP and CBFV should be. At present, it is not clear what the optimal length is, although it is known that ABP time series with greater variance lead to more robust autoregulation indices, as described in [18].

Acknowledgements

A.M. acknowledges the partial support of the EPSRC project EP/K036157/1. K.P.O. acknowledges support from NSF grants DMS-1513179 and CCF-1522054. They also thank the MRC (expand). The authors thank Greg Mader, Katrina Johnson, and Erica Rutter for helpful discussions.
References

[1] R. Aaslid, K.-F. Lindegaard, W. Sorteberg, and H. Nornes. Cerebral autoregulation dynamics in humans. Stroke, 20(1):45–52, 1989.
[2] Siddhartha Chib and Edward Greenberg. Understanding the Metropolis–Hastings algorithm. The American Statistician, 49(4):327–335, 1995.
[3] M. Czosnyka, P. Smielewski, S. Piechnik, L. A. Steiner, and J. D. Pickard. Cerebral autoregulation following head injury. Journal of Neurosurgery, 95(5):756–763, 2001.
[4] Marek Czosnyka, Piotr Smielewski, Peter Kirkpatrick, David K. Menon, and John D. Pickard. Monitoring of cerebral autoregulation in head-injured patients. Stroke, 27(10):1829–1834, 1996.
[5] A. Dagal and A. M. Lam. Cerebral autoregulation and anesthesia. Current Opinion in Anesthesiology, 22(5):547–552, 2009.
[6] Cameron Davidson-Pilon. Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference. Addison-Wesley Professional, 2015.
[7] Suzanne L. Dawson, Melanie J. Blake, Ronney B. Panerai, and John F. Potter. Dynamic but not static cerebral autoregulation is impaired in acute ischaemic stroke. Cerebrovascular Diseases, 10(2):126–132, 2000.
[8] U. Dirnagl and W. Pulsinelli. Autoregulation of cerebral blood flow in experimental focal brain ischemia. Journal of Cerebral Blood Flow & Metabolism, 10:327–336, 1990.
[9] Cole A. Giller. The frequency-dependent behavior of cerebral autoregulation. Neurosurgery, 27(3):362–368, 1990.
[10] S. L. Harper, H. G. Bohlen, and M. J. Ruben. Arterial and microvascular contributions to cerebral cortical autoregulation in rats. American Journal of Physiology — Heart and Circulatory Physiology, 246:H17–H24, 1979.
[11] W. Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
[12] N. A. Lassen. Cerebral blood flow and oxygen consumption in man. Physiological Reviews, 39:183–238, 1959.
[13] Qiao Li, Roger G. Mark, and Gari D. Clifford. Artificial arterial blood pressure artifact models and an evaluation of a robust blood pressure and heart rate estimator. BioMedical Engineering OnLine, 8(1):13, 2009.
[14] L. A. Lipsitz, S. Mukai, J. Hamner, M. Gagnon, and V. Babikian. Dynamic regulation of middle cerebral artery blood flow velocity in aging and hypertension. Stroke, 31:1897–1903, 2000.
[15] E. T. MacKenzie, J. K. Farrar, W. Fitch, D. I. Graham, P. C. Gregory, and A. M. Harper. Effects of hemorrhagic hypotension on the cerebral circulation. I. Cerebral blood flow and pial arteriolar caliber. Stroke, 10:711–718, 1979.
[16] E. T. MacKenzie, S. Strandgaard, D. I. Graham, J. V. Jones, A. M. Harper, and J. K. Farrar. Effects of acutely induced hypertension in cats on pial arteriolar caliber, local cerebral blood flow, and the blood-brain barrier. Circulation Research, 39:33–41, 1976.
[17] Greg Mader, Mette Olufsen, and Adam Mahdi. Modeling cerebral blood flow velocity during orthostatic stress. Annals of Biomedical Engineering, 43(8):1748–1758, 2015.
[18] Adam Mahdi, Dragana Nikolic, Anthony A. Birch, Mette S. Olufsen, Ronney B. Panerai, David M. Simpson, and Stephen J. Payne. Increased blood pressure variability upon standing up improves reproducibility of cerebral autoregulation indices. arXiv preprint arXiv:1705.04942, 2017.
[19] Adam Mahdi, Erica M. Rutter, and Stephen J. Payne. Effects of non-physiological blood pressure artefacts on cerebral autoregulation. Medical Engineering & Physics, 2017.
[20] R. B. Panerai. Cerebral autoregulation: from models to clinical applications. Cardiovascular Engineering, 8:43–59, 2008.
[21] R. B. Panerai, E. L. Sammons, S. M. Smith, W. E. Rathbone, S. Bentley, J. F. Potter, and N. J. Samani. Continuous estimates of dynamic cerebral autoregulation: influence of non-invasive arterial blood pressure measurements. Physiological Measurement, 29:497–513, 2008.
[22] R. B. Panerai, D. M. Simpson, S. T. Deverson, P. Mahony, P. Hayes, and D. H. Evans. Multivariate dynamic analysis of cerebral blood flow regulation in humans. IEEE Transactions on Biomedical Engineering, 47(3):419–423, 2000.
[23] Christian P. Robert. Monte Carlo Methods. Wiley, 2004.
[24] F. P. Tiecks, A. M. Lam, R. Aaslid, and D. W. Newell. Comparison of static and dynamic cerebral autoregulation measurements. Stroke, 26:1014–1019, 1995.
[25] W. Zong, T. Heldt, G. B. Moody, and R. G. Mark. An open-source algorithm to detect onset of arterial blood pressure pulses. In Computers in Cardiology, 2003.