Gary M. Raymond
University of Washington
Publications
Featured research published by Gary M. Raymond.
Pflügers Archiv: European Journal of Physiology | 2000
Andras Eke; Peter Herman; James B. Bassingthwaighte; Gary M. Raymond; Donald B. Percival; Michael J. Cannon; I. Balla; C. Ikrényi
Many physiological signals appear fractal, in having self-similarity over a large range of their power spectral densities. They are analogous to one of two classes of discretely sampled pure fractal time signals, fractional Gaussian noise (fGn) or fractional Brownian motion (fBm). The fGn series are the successive differences between elements of an fBm series; they are stationary and are completely characterized by two parameters, σ², the variance, and H, the Hurst coefficient. Such efficient characterization of physiological signals is valuable since H defines the autocorrelation and the fractal dimension of the time series. Estimation of H from Fourier analysis is inaccurate, so more robust methods are needed. Dispersional analysis (Disp) is good for noise signals while bridge-detrended scaled windowed variance analysis (bdSWV) is good for motion signals. Signals whose slopes of their power spectral densities lie near the border between fGn and fBm are difficult to classify. A new method using signal summation conversion (SSC), wherein an fGn is converted to an fBm or an fBm to a summed fBm and bdSWV then applied, greatly improves the classification and the reliability of Ĥ, the estimate of H, for the time series. Applying these methods to laser-Doppler blood cell perfusion signals obtained from the brain cortex of anesthetized rats gave Ĥ of 0.24±0.02 (SD, n=8) and defined the signal as a fractional Brownian motion. The implication is that the flow signal is the summation (motion) of a set of local velocities from neighboring vessels that are negatively correlated, as if induced by local resistance fluctuations.
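To make the SSC idea concrete, here is a minimal Python sketch (not the authors' code): the series is cumulatively summed, a bridge-detrended scaled windowed variance slope is estimated from the summed series, and a decision rule assumed here for illustration, rather than taken from the paper, classifies the original signal as fGn or fBm from that slope.

```python
import numpy as np

def bridge_detrended_swv(x, window_sizes=None):
    """Bridge-detrended scaled windowed variance (bdSWV) slope estimate.

    Minimal sketch: split the series into non-overlapping windows, subtract
    the line joining each window's first and last points (the "bridge"),
    take the SD of the residuals, and regress log(mean SD) on log(window
    size); the slope estimates H for an fBm-like series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if window_sizes is None:
        window_sizes = [2**k for k in range(2, int(np.log2(n)))]
    log_m, log_sd = [], []
    for m in window_sizes:
        sds = []
        for start in range(0, n - m + 1, m):
            w = x[start:start + m]
            bridge = np.linspace(w[0], w[-1], m)   # line through endpoints
            sds.append(np.std(w - bridge, ddof=1))
        log_m.append(np.log(m))
        log_sd.append(np.log(np.mean(sds)))
    slope, _ = np.polyfit(log_m, log_sd, 1)
    return slope

def ssc_classify(signal):
    """Signal summation conversion: cumulate, then apply bdSWV."""
    summed = np.cumsum(signal)          # fGn -> fBm, fBm -> summed fBm
    slope = bridge_detrended_swv(summed)
    # Hypothetical decision rule (see lead-in): slope < 1 suggests the
    # original was fGn with H ~ slope; slope > 1 suggests fBm, H ~ slope - 1.
    if slope < 1:
        return "fGn", slope
    return "fBm", slope - 1

# Classify an ordinary white-noise series: it is fGn with H = 0.5.
rng = np.random.default_rng(0)
print(ssc_classify(rng.standard_normal(2**12)))
```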
Physica A-statistical Mechanics and Its Applications | 1997
Michael J. Cannon; Donald B. Percival; David C. Caccia; Gary M. Raymond; James B. Bassingthwaighte
Three scaled windowed variance methods (standard, linear regression detrended, and bridge detrended) for estimating the Hurst coefficient (H) are evaluated. The Hurst coefficient, with 0 < H < 1, characterizes self-similar decay in the time-series autocorrelation function. The scaled windowed variance methods estimate H for fractional Brownian motion (fBm) signals, which are cumulative sums of fractional Gaussian noise (fGn) signals. For all three methods both the bias and standard deviation of estimates are less than 0.05 for series having N ≥ 2^9 points. Estimates for short series (N < 2^8) are unreliable. To have a 0.95 probability of distinguishing between two signals with true H differing by 0.1, more than 2^15 points are needed. All three methods proved more reliable (based on bias and variance of estimates) than Hurst's rescaled range analysis, periodogram analysis, and autocorrelation analysis, and as reliable as dispersional analysis. The latter methods can only be applied to fGn or differences of fBm, while the scaled windowed variance methods must be applied to fBm or cumulative sums of fGn.
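A hedged Python sketch of the scaled windowed variance family follows; the detrend switch covers the standard, linear-regression-detrended, and bridge-detrended variants, and the window-size schedule and regression details are simplified illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def swv_estimate(x, detrend="bridge"):
    """Scaled windowed variance estimate of H for an fBm-like series.

    detrend: "none" (standard SWV), "linear" (least-squares line removed),
    or "bridge" (line through the window's endpoints removed).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [2**k for k in range(2, int(np.log2(n)))]
    log_m, log_sd = [], []
    for m in sizes:
        sds = []
        for start in range(0, n - m + 1, m):
            w = x[start:start + m]
            t = np.arange(m)
            if detrend == "linear":
                b, a = np.polyfit(t, w, 1)      # slope, intercept
                w = w - (a + b * t)
            elif detrend == "bridge":
                w = w - np.linspace(w[0], w[-1], m)
            sds.append(np.std(w, ddof=1))
        log_m.append(np.log(m))
        log_sd.append(np.log(np.mean(sds)))
    return np.polyfit(log_m, log_sd, 1)[0]       # slope ~ H

# fBm is the cumulative sum of fGn; white noise is fGn with H = 0.5,
# so its cumulative sum should give estimates near 0.5 for all variants.
rng = np.random.default_rng(0)
fbm = np.cumsum(rng.standard_normal(2**12))
print({d: round(swv_estimate(fbm, d), 3) for d in ("none", "linear", "bridge")})
```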
Annals of Biomedical Engineering | 1995
James B. Bassingthwaighte; Gary M. Raymond
Fractal signals can be characterized by their fractal dimension plus some measure of their variance at a given level of resolution. The Hurst exponent, H, is <0.5 for rough anticorrelated series, >0.5 for positively correlated series, and =0.5 for random, white noise series. Several methods are available: dispersional analysis, Hurst rescaled range analysis, autocorrelation measures, and power spectral analysis. Short data sets are notoriously difficult to characterize; research to define the limitations of the various methods is incomplete. This numerical study of fractional Brownian noise focuses on determining the limitations of the dispersional analysis method, in particular, assessing the effects of signal length and of added noise on the estimate of the Hurst coefficient, H (which ranges from 0 to 1 and equals 2 − D, where D is the fractal dimension). There are three general conclusions: (i) pure fractal signals of length greater than 256 points give estimates of H that are biased but have standard deviations less than 0.1; (ii) the estimates of H tend to be biased toward H=0.5 at both high H (>0.8) and low H (<0.5), and biases are greater for short time series than for long; and (iii) the addition of Gaussian noise (H=0.5) degrades the signals: for those with negative correlation (H<0.5) the degradation is great, the noise has only mild degrading effects on signals with H>0.6, and the method is particularly robust for signals with high H and long series, where even 100% added noise has only a few percent effect on the estimate of H. Dispersional analysis can be regarded as a strong method for characterizing biological or natural time series, which generally show long-range positive correlation.
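The core of dispersional analysis can be sketched in a few lines of Python; the bin-size schedule and the stopping rule below are illustrative assumptions, not the authors' implementation. The SD of grouped means of an fGn series scales as m^(H − 1), so H is one plus the log-log slope.

```python
import numpy as np

def dispersional_H(x, min_bins=4):
    """Dispersional analysis (Disp) sketch for an fGn-like (noise) series.

    Group the series into bins of size m, take the SD of the bin means,
    and regress log(SD) on log(m); return H = 1 + slope.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_m, log_sd = [], []
    m = 1
    while n // m >= min_bins:
        k = (n // m) * m                      # trim to a multiple of m
        means = x[:k].reshape(-1, m).mean(axis=1)
        log_m.append(np.log(m))
        log_sd.append(np.log(means.std(ddof=1)))
        m *= 2
    slope = np.polyfit(log_m, log_sd, 1)[0]
    return 1.0 + slope

# White Gaussian noise is fGn with H = 0.5; the estimate should be near 0.5.
rng = np.random.default_rng(1)
print(round(dispersional_H(rng.standard_normal(2**14)), 3))
```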
Annals of Biomedical Engineering | 1994
James B. Bassingthwaighte; Gary M. Raymond
Rescaled range analysis is a means of characterizing a time series or a one-dimensional (1-D) spatial signal that provides simultaneously a measure of variance and of the long-term correlation or “memory.” The trend-corrected method is based on the statistical self-similarity in the signal: in the standard approach one measures the ratio R/S of the range R of the sum of the deviations from the local mean divided by the standard deviation S from the mean. For fractal signals R/S is a power law function of the length τ of each segment of the set of segments into which the data set has been divided. Over a wide range of τ the relationship is R/S = kτ^H, where k is a scalar and H is the Hurst exponent. (For a 1-D signal f(t), the exponent H = 2 − D, with D being the fractal dimension.) The method has been tested extensively on fractional Brownian signals of known H to determine its accuracy, bias, and limitations. R/S tends to give biased estimates of H, too low for H>0.72 and too high for H<0.72. Hurst analysis without trend correction differs by finding the range R of accumulation of differences from the global mean over the total period of data accumulation, rather than from the mean over each τ. The trend-corrected method gives better estimates of H on Brownian fractal signals of known H when H≥0.5, that is, for signals with positive correlations between neighboring elements. Rescaled range analysis has poor convergence properties, requiring about 2,000 points for 5% accuracy and 200 for 10% accuracy. Empirical corrections to the estimates of H can be made by graphical interpolation to remove bias in the estimates. Hurst's 1951 conclusion that many natural phenomena exhibit not random but correlated time series is strongly affirmed.
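A short Python sketch of the local-mean (trend-corrected) R/S procedure described above; the segment lengths and minimum-segment cutoff are arbitrary illustrative choices rather than the paper's settings.

```python
import numpy as np

def rescaled_range_H(x, min_segments=2):
    """Rescaled range (R/S) sketch using deviations from each segment's mean.

    For each segment length tau, accumulate deviations from the segment
    mean, take the range R of that cumulative sum, divide by the segment
    SD S, and average R/S over segments.  Since R/S ~ k * tau**H, the
    slope of log(R/S) versus log(tau) estimates H.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_tau, log_rs = [], []
    tau = 8
    while n // tau >= min_segments:
        rs = []
        for start in range(0, (n // tau) * tau, tau):
            seg = x[start:start + tau]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()
            s = seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        log_tau.append(np.log(tau))
        log_rs.append(np.log(np.mean(rs)))
        tau *= 2
    return np.polyfit(log_tau, log_rs, 1)[0]     # slope ~ H

# Applied to white noise (an fGn with H = 0.5); the estimate is typically
# biased for short segments, consistent with the convergence problems noted.
rng = np.random.default_rng(2)
print(round(rescaled_range_H(rng.standard_normal(2**13)), 3))
```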
Physica A-statistical Mechanics and Its Applications | 1997
David C. Caccia; Donald B. Percival; Michael J. Cannon; Gary M. Raymond; James B. Bassingthwaighte
Precise reference signals are required to evaluate methods for characterizing a fractal time series. Here we use fGp (fractional Gaussian process) to generate exact fractional Gaussian noise (fGn) reference signals for one-dimensional time series. The average autocorrelation of multiple realizations of fGn converges to the theoretically expected autocorrelation. Two methods commonly used to generate fractal time series, an approximate spectral synthesis method (SSM) and the successive random addition (SRA) method, do not give the correct correlation structures and should be abandoned. Time series from fGp were used to test how well several versions of rescaled range analysis (R/S) and dispersional analysis (Disp) estimate the Hurst coefficient (0 < H < 1.0). Disp is unbiased for H < 0.9 and series length N ≥ 1024, but underestimates H when H > 0.9. R/S-detrended overestimates H for time series with H < 0.7 and underestimates H for H > 0.7. Estimates of H (Ĥ) from all versions of Disp usually have lower bias and variance than those from R/S. All versions of dispersional analysis, Disp, now tested on fGp, are better than we previously thought and are recommended for evaluating time series as long-memory processes.
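For readers who want an exact reference signal without the fGp code, here is a sketch using the textbook route: build the theoretical fGn autocorrelation and multiply the Cholesky factor of the resulting Toeplitz matrix by white noise. This is not the paper's fGp algorithm, only a small exact generator suitable for modest series lengths.

```python
import numpy as np

def fgn_autocorrelation(k, H):
    """Theoretical autocorrelation of fractional Gaussian noise at lag k."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

def exact_fgn(n, H, rng=None):
    """Exact fGn sample via Cholesky factorization of the covariance matrix.

    O(n^3), so only practical for modest n; exact in the sense that the
    sample has the theoretical fGn covariance structure.
    """
    rng = rng or np.random.default_rng()
    lags = np.arange(n)
    cov = fgn_autocorrelation(lags[None, :] - lags[:, None], H)
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(n)

# The sample lag-1 autocorrelation should approach the theoretical value.
H = 0.8
x = exact_fgn(1024, H, np.random.default_rng(3))
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(lag1, 3), round(float(fgn_autocorrelation(np.array([1]), H)[0]), 3))
```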
Annals of Biomedical Engineering | 1996
Richard B. King; Gary M. Raymond; James B. Bassingthwaighte
It has been known for some time that regional blood flows within an organ are not uniform. Useful measures of heterogeneity of regional blood flows are the standard deviation and coefficient of variation or relative dispersion of the probability density function (PDF) of regional flows obtained from the regional concentrations of tracers that are deposited in proportion to blood flow. When a mathematical model is used to analyze dilution curves after tracer solute administration, for many solutes it is important to account for flow heterogeneity and the wide range of transit times through multiple pathways in parallel. Failure to do so leads to bias in the estimates of volumes of distribution and membrane conductances. Since in practice the number of paths used should be relatively small, the analysis is sensitive to the choice of the individual elements used to approximate the distribution of flows or transit times. Presented here is a method for modeling heterogeneous flow through an organ using a scheme that covers both the high flow and long transit time extremes of the flow distribution. With this method, numerical experiments are performed to determine the errors made in estimating parameters when flow heterogeneity is ignored, in both the absence and presence of noise. The magnitude of the errors in the estimates depends upon the system parameters, the amount of flow heterogeneity present, and whether the shape of the input function is known. In some cases, some parameters may be estimated to within 10% when heterogeneity is ignored (homogeneous model), but errors of 15–20% may result, even when the level of heterogeneity is modest. In repeated trials in the presence of 5% noise, the mean of the estimates was always closer to the true value with the heterogeneous model than when heterogeneity was ignored, but the distributions of the estimates from the homogeneous and heterogeneous models overlapped for some parameters when outflow dilution curves were analyzed. The separation between the distributions was further reduced when tissue content curves were analyzed. It is concluded that multipath models accounting for flow heterogeneity are a vehicle for assessing the effects of flow heterogeneity under the conditions applicable to specific laboratory protocols, that efforts should be made to assess the actual level of flow heterogeneity in the organ being studied, and that the errors in parameter estimates are generally smaller when the input function is known rather than estimated by deconvolution.
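As a toy illustration of the quantities involved, the sketch below computes the relative dispersion of a sampled regional-flow PDF and collapses it into a handful of parallel paths; the lognormal flow PDF and the equal-probability binning are hypothetical choices for illustration, not the discretization scheme used in the paper.

```python
import numpy as np

def relative_dispersion(flows):
    """Relative dispersion (coefficient of variation) of regional flows."""
    flows = np.asarray(flows, dtype=float)
    return flows.std(ddof=1) / flows.mean()

def discretize_flow_pdf(samples, n_paths=7):
    """Collapse a sampled flow distribution into a few parallel paths.

    Hypothetical scheme: split the sorted relative flows into n_paths
    equal-probability bins and represent each bin by its mean flow and
    probability mass, so both the high-flow and low-flow (long transit
    time) extremes of the distribution keep a path.
    """
    f = np.sort(np.asarray(samples, dtype=float))
    bins = np.array_split(f, n_paths)
    path_flow = np.array([b.mean() for b in bins])
    path_weight = np.array([len(b) / len(f) for b in bins])
    return path_flow, path_weight

# Illustrative lognormal regional-flow PDF, normalized to mean relative flow 1.
rng = np.random.default_rng(4)
flows = rng.lognormal(mean=0.0, sigma=0.35, size=10_000)
flows /= flows.mean()
pf, pw = discretize_flow_pdf(flows, n_paths=7)
print(round(relative_dispersion(flows), 3))      # RD of the sampled PDF
print(np.round(pf, 2), np.round(pw, 2))          # per-path flow and weight
```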
Philosophical Transactions of the Royal Society A | 2006
James B. Bassingthwaighte; Gary M. Raymond; James D. Ploger; Lisa M. Schwartz; Thomas R. Bukowski
Endothelial cells lining myocardial capillaries not only impede transport of blood solutes to the contractile cells, but also take up and release substrates, competing with myocytes. Solutes permeating this barrier exhibit concentration gradients along the capillary. This paper introduces a generic model, GENTEX, to characterize blood–tissue exchanges. GENTEX is a whole organ model of the vascular network providing intraorgan flow heterogeneity and accounts for substrate transmembrane transport, binding and metabolism in erythrocytes, plasma, endothelial cells, interstitial space and cardiomyocytes. The model is tested here for the analysis of multiple tracer indicator dilution data on purine nucleoside metabolism in isolated Krebs–Henseleit-perfused non-working hearts. It has also been used for analysing NMR contrast data for regional myocardial flows and for positron emission tomographic studies of cardiac receptor kinetics. The facilitating transporters, binding sites and enzymatic reactions are nonlinear elements and allow competition between substrates and a reaction sequence of up to five substrate–product reactions in a metabolic network. Strategies for application start with experiment designs incorporating inert reference tracers. For the estimation of endothelial and sarcolemmal permeability-surface area products and metabolism of the substrates and products, model solutions were optimized to fit the data from pairs of tracer injections (of either inosine or adenosine, plus the reference tracers) injected under the same circumstances a few minutes later. The results provide a self-consistent description of nucleoside metabolism in a beating well-perfused rabbit heart, and illustrate the power of the model to fit multiple datasets simultaneously.
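GENTEX itself is a large, spatially distributed multi-region model; purely to fix ideas, the sketch below integrates a deliberately simplified well-mixed two-region exchange with a flow F and a single permeability-surface area product PS. All parameter values, units, and the input bolus are illustrative assumptions, not values from the paper.

```python
import numpy as np

def two_region_exchange(t, c_in, F=1.0, Vp=0.05, Vt=0.60, PS=1.0):
    """Minimal two-region (plasma/parenchyma) exchange model.

    A simplified, well-mixed compartmental sketch -- not GENTEX, which is
    axially distributed and includes erythrocytes, endothelium, interstitium
    and myocytes.  Illustrative units: F and PS in ml/(g*min), volumes in
    ml/g, time in minutes.

        dCp/dt = (F/Vp)*(Cin - Cp) - (PS/Vp)*(Cp - Ct)
        dCt/dt = (PS/Vt)*(Cp - Ct)
    """
    dt = t[1] - t[0]
    cp = np.zeros_like(t)
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dcp = (F / Vp) * (c_in[i-1] - cp[i-1]) - (PS / Vp) * (cp[i-1] - ct[i-1])
        dct = (PS / Vt) * (cp[i-1] - ct[i-1])
        cp[i] = cp[i-1] + dt * dcp     # explicit Euler step
        ct[i] = ct[i-1] + dt * dct
    return cp, ct

t = np.arange(0.0, 5.0, 0.001)                 # minutes
c_in = np.exp(-((t - 0.3) / 0.1)**2)           # idealized tracer input bolus
cp, ct = two_region_exchange(t, c_in)
print(round(cp.max(), 3), round(ct.max(), 3))  # plasma and tissue peaks
```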
Physica A-statistical Mechanics and Its Applications | 1999
Gary M. Raymond; James B. Bassingthwaighte
Methods for estimating the fractal dimension, D, or the related Hurst coefficient, H, for a one-dimensional fractal series include Hurst's method of rescaled range analysis, spectral analysis, dispersional analysis, and scaled windowed variance analysis (which is related to detrended fluctuation analysis). Dispersional analysis estimates H by using the variance of the grouped means of discrete fractional Gaussian noise series (DfGn). Scaled windowed variance analysis estimates H using the mean of grouped variances of discrete fractional Brownian motion (DfBm) series. Both dispersional analysis and scaled windowed variance analysis have small bias and variance in their estimates of the Hurst coefficient. This study demonstrates that both methods derive their accuracy from their strict mathematical relationship to the expected value of the correlation function of DfGn. The expected values of the variance of the grouped means for dispersional analysis on DfGn and the mean of the grouped variance for scaled windowed variance analysis on DfBm are calculated. An improved formulation for scaled windowed variance analysis is given. The expected values using these analyses on the wrong kind of series (dispersional analysis on DfBm and scaled windowed variance analysis on DfGn) are also calculated.
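The expected-value relations the abstract refers to can be written compactly; the following is a sketch in standard notation (σ² for the unit-lag increment variance, m for the group or window size), not a reproduction of the paper's derivation.

```latex
\begin{align}
  r(k) &= \tfrac{1}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)
       && \text{autocorrelation of DfGn at lag } k,\\
  \operatorname{Var}\!\big[\bar{x}_m\big] &= \sigma^{2} m^{2H-2}
       && \text{variance of DfGn means over groups of size } m,\\
  \operatorname{SD}\!\big[\bar{x}_m\big] &\propto m^{H-1}
       && \text{so the Disp log-log slope is } H-1,\\
  \operatorname{SD}_{\text{window}}(m) &\propto m^{H}
       && \text{windowed SD of DfBm, the SWV scaling.}
\end{align}
```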
F1000Research | 2014
Erik Butterworth; Bartholomew Jardine; Gary M. Raymond; Maxwell Lewis Neal; James B. Bassingthwaighte
JSim is a simulation system for developing models, designing experiments, and evaluating hypotheses on physiological and pharmacological systems through the testing of model solutions against data. It is designed for interactive, iterative manipulation of the model code, handling of multiple data sets and parameter sets, and for making comparisons among different models running simultaneously or separately. Interactive use is supported by a large collection of graphical user interfaces for model writing and compilation diagnostics, defining input functions, model runs, selection of algorithms solving ordinary and partial differential equations, run-time multidimensional graphics, parameter optimization (8 methods), sensitivity analysis, and Monte Carlo simulation for defining confidence ranges. JSim uses the Mathematical Modeling Language (MML), a declarative syntax specifying algebraic and differential equations. Imperative constructs written in other languages (MATLAB, FORTRAN, C++, etc.) are accessed through procedure calls. MML syntax is simple, basically defining the parameters and variables, then writing the equations in a straightforward, easily read and understood mathematical form. This makes JSim good for teaching modeling as well as for model analysis for research. For high throughput applications, JSim can be run as a batch job. JSim can automatically translate models from the repositories for Systems Biology Markup Language (SBML) and CellML models. Stochastic modeling is supported. MML supports assigning physical units to constants and variables and automates checking dimensional balance as the first step in verification testing. Automatic unit scaling follows, e.g. seconds to minutes, if needed. The JSim Project File sets a standard for reproducible modeling analysis: it includes in one file everything for analyzing a set of experiments: the data, the models, the data fitting, and evaluation of parameter confidence ranges. JSim is open source; it and about 400 human-readable open source physiological/biophysical models are available at http://www.physiome.org/jsim/.
Journal of Physics A | 1998
Hong Qian; Gary M. Raymond; James B. Bassingthwaighte
As a generalization of one-dimensional fractional Brownian motion (1dfBm), we introduce a class of two-dimensional, self-similar, strongly correlated random walks whose variance scales with the power law N^(2H) (0 < H < 1). We report analytical results on the statistical size and shape, and segment distribution of its trajectory in the limit of large N. The relevance of these results to polymer theory is discussed. We also study the basic properties of a second generalization of 1dfBm, the two-dimensional fractional Brownian random field (2dfBrf). It is shown that the product of two 1dfBms is the only 2dfBrf which satisfies the self-similarity defined by Sinai.
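A small simulation sketch of the scaling claim, under the assumption that the walk's x and y coordinates are independent 1dfBms built from the same exact-fGn construction sketched earlier; step counts, walk counts, and seeds are illustrative. The mean squared end-to-end distance divided by N^(2H) should stay roughly constant.

```python
import numpy as np

def fgn(n, H, rng):
    """Exact fGn via Cholesky of the Toeplitz correlation matrix (O(n^3))."""
    k = np.abs(np.arange(n)[None, :] - np.arange(n)[:, None]).astype(float)
    cov = 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def mean_square_end_to_end(n_steps, H, n_walks=200, rng=None):
    """2D walk with independent fBm coordinates; return the sample E[R_N^2]."""
    rng = rng or np.random.default_rng()
    r2 = []
    for _ in range(n_walks):
        x = np.cumsum(fgn(n_steps, H, rng))     # 1dfBm in x
        y = np.cumsum(fgn(n_steps, H, rng))     # independent 1dfBm in y
        r2.append(x[-1]**2 + y[-1]**2)
    return np.mean(r2)

rng = np.random.default_rng(5)
H = 0.75
for n in (64, 128, 256):
    print(n, round(mean_square_end_to_end(n, H, rng=rng) / n**(2*H), 2))
# The ratio E[R_N^2] / N**(2H) should be roughly constant (about 2 here,
# since each coordinate contributes variance N**(2H)).
```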