Sherri Gold
University of Iowa
Publications
Featured research published by Sherri Gold.
Human Brain Mapping | 1998
Sherri Gold; Brad Christian; Stephan Arndt; Gene Zeien; Ted Cizadlo; Debra L. Johnson; Michael Flaum; Nancy C. Andreasen
Currently, there are many choices of software packages for the analysis of fMRI data, each offering many options. Since no one package can meet the needs of all fMRI laboratories, it is helpful to know what each package offers. Several software programs were evaluated for comparison of their documentation, ease of learning and use, referencing, data input steps required, types of statistical methods offered, and output choices. The functionality of each package was detailed and discussed. AFNI 2.01, SPM96, Stimulate 5.0, MEDIMAX 2.01, and FIT were tested. FIASCO, Yale, and MEDx 2.0 were described but not tested. A description of each package is provided. Hum. Brain Mapping 6:73–84, 1998.
Journal of Cerebral Blood Flow and Metabolism | 1996
Stephan Arndt; Ted Cizadlo; Nancy C. Andreasen; Dan Heckel; Sherri Gold; Daniel S. O'Leary
Tests comparing image sets can play a critical role in PET research, providing a yes-no answer to the question “Are two image sets different?” The statistical goal is to determine how often observed differences would occur by chance alone. We examined randomization methods to provide several omnibus tests for PET images and compared these tests with two currently used methods. In the first series of analyses, normally distributed image data were simulated fulfilling the requirements of standard statistical tests. These analyses generated power estimates and compared the various test statistics under optimal conditions. Varying whether the standard deviations were local or pooled estimates provided an assessment of a distinguishing feature between the SPM and Montreal methods. In a second series of analyses, we more closely simulated current PET acquisition and analysis techniques. Finally, PET images from normal subjects were used as an example of randomization. Randomization proved to be a highly flexible and powerful statistical procedure. Furthermore, the randomization test does not require the extensive and unrealistic statistical assumptions made by standard procedures currently in use.
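The randomization logic the abstract describes can be sketched as a generic permutation test on two image sets. The test statistic chosen here (maximum absolute voxelwise mean difference) and all names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def randomization_test(group_a, group_b, n_perm=999, seed=0):
    """Omnibus randomization test on two sets of flattened images.

    group_a, group_b: (n_subjects, n_voxels) arrays.
    Statistic: maximum absolute voxelwise difference in group means.
    The p-value is the fraction of random relabelings whose statistic
    meets or exceeds the observed one (with the +1 correction).
    """
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])        # pooled (n_a + n_b, n_voxels)
    n_a = group_a.shape[0]

    def stat(d):
        return np.abs(d[:n_a].mean(axis=0) - d[n_a:].mean(axis=0)).max()

    observed = stat(data)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(data.shape[0])   # shuffle group labels
        if stat(data[perm]) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_perm + 1)
    return observed, p_value
```

Because the relabeling is done over whole images, the maximum statistic gives a single omnibus answer without voxelwise multiple-comparison corrections.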
NeuroImage | 1996
Stephan Arndt; Ted Cizadlo; Daniel S. O'Leary; Sherri Gold; Nancy C. Andreasen
Image intensity normalization is frequently applied to eliminate or adjust for subject- or injection-related variation in global cerebral blood flow (gCBF) and other sources of nuisance variation. Normalization has several other positive effects on the analysis of PET images. However, the choice of an intensity normalization technique affects the statistical and psychometric properties of the image data. We compared three normalization procedures, the ratio approach (regional (r)CBF/gCBF), histogram equalization, and ANCOVA, on both PET count and flow data sets. The ratio method presents the proportional increase of regions, the histogram equalization method offers the relative ranking of intensities over the image, and the ANCOVA method provides statistical deviations from an expected linear model of regional values from the subject's gCBF. The original study used 33 normal subjects in a standard subtraction paradigm. The normalization methods were evaluated on their ability to remove extraneous error variation, induce homogeneity of intersubject variation, and remove unwanted dependencies. In general, the normalization modified the subtraction image more than the individual condition images. All three methods worked well at removing the dependency of rCBF on gCBF in count and flow images. For count data, the three methods also reduced the amount of error variation equally well, improving the signal to noise ratio. For flow data, the histogram equalization and ratio methods worked best at reducing statistical error. All three methods dramatically stabilized the variance over the image.
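The three normalization procedures compared in the abstract can be sketched in a few lines each; these are minimal textbook-style versions (all function names are assumptions), not the study's implementation:

```python
import numpy as np

def ratio_normalize(images):
    """Ratio method: divide each voxel by that subject's global mean (gCBF)."""
    return images / images.mean(axis=1, keepdims=True)

def histogram_equalize(images):
    """Rank-transform each image so only the relative ordering of
    intensities over the image survives."""
    ranks = np.argsort(np.argsort(images, axis=1), axis=1)
    return (ranks + 1) / images.shape[1]

def ancova_normalize(images):
    """ANCOVA-style method: remove, per voxel, the least-squares linear
    dependence of regional values on the subject's global mean."""
    g = images.mean(axis=1)
    g_c = g - g.mean()                            # centered global covariate
    slope = (g_c @ (images - images.mean(axis=0))) / (g_c @ g_c)
    return images - np.outer(g_c, slope)          # residual + voxel mean
```

All three take an (n_subjects, n_voxels) array; after the ANCOVA step the residual images are, by construction, uncorrelated with the global covariate.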
NeuroImage | 2000
Vincent A. Magnotta; Sherri Gold; Nancy C. Andreasen; James C. Ehrhardt; William T.C. Yuh
There is a significant amount of interest in studying the thalamus because of its central location in the brain and its role as a gatekeeper to higher centers of cognition. Imaging and measuring the individual subnuclei of the thalamus has proven extremely difficult in MR because of the limited contrast-to-noise ratio (CNR) of the MR sequences used. This report describes a novel MR pulse sequence known as cortex attenuated inversion recovery (CAIR), which increases the CNR in images and allows the individual subnuclei of the thalamus to be visualized. CAIR selectively nulls the gray matter in the brain using an inversion recovery sequence with an inversion time of 700 ms at 1.5 T.
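The 700 ms inversion time is consistent with the standard inversion-recovery nulling condition. A quick check, assuming full longitudinal recovery between inversions and a gray-matter T1 of roughly 1000 ms at 1.5 T (an approximate literature value, not a figure from this report):

```python
import math

def null_inversion_time(t1_ms):
    """Inversion time that nulls a tissue with relaxation time T1,
    assuming complete recovery between inversions (TR >> T1):
    Mz(TI) = M0 * (1 - 2*exp(-TI/T1)) = 0  ->  TI = T1 * ln(2).
    """
    return t1_ms * math.log(2)

ti = null_inversion_time(1000)   # about 693 ms, close to the 700 ms used
```

Shorter repetition times would shift the exact null point, which is one reason empirical TIs differ slightly from the idealized value.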
International Journal of Psychophysiology | 1996
James E. Arruda; Michael D. Weiler; Dominic Valentino; W. Grant Willis; Joseph S. Rossi; Robert A. Stern; Sherri Gold; Laura Costa
Principal-components analysis (PCA) has been used in quantitative electroencephalogram (qEEG) research to statistically reduce the dimensionality of the original qEEG measures to a smaller set of theoretically meaningful component variables. However, PCAs involving qEEG have frequently been performed with small sample sizes, producing solutions that are highly unstable. Moreover, solutions have not been independently confirmed using an independent sample and the more rigorous confirmatory factor analysis (CFA) procedure. This paper was intended to illustrate, by way of example, the process of applying PCA and CFA to qEEG data. Explicit decision rules pertaining to the application of PCA and CFA to qEEG are discussed. In the first of two experiments, PCAs were performed on qEEG measures collected from 102 healthy individuals as they performed an auditory continuous performance task. Component solutions were then validated in an independent sample of 106 healthy individuals using the CFA procedure. The results of this experiment confirmed the validity of an oblique, seven-component solution. Measures of internal consistency and test-retest reliability for the seven-component solution were high. These results support the use of qEEG data as a stable and valid measure of neurophysiological functioning. As measures of these neurophysiological processes are easily derived, they may prove useful in discriminating between and among clinical (neurological) and control populations. Future research directions are highlighted.
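The dimensionality-reduction step the abstract describes can be sketched as a plain covariance-eigendecomposition PCA; this is a generic illustration (names and shapes assumed), not the paper's rotation or extraction criteria:

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance matrix.

    X: (n_observations, n_measures) matrix, e.g. qEEG measures per subject.
    Returns (scores, loadings, explained_variance_ratio), with components
    ordered by decreasing explained variance.
    """
    Xc = X - X.mean(axis=0)                        # center each measure
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    loadings = eigvecs[:, order]
    scores = Xc @ loadings                         # component scores
    ratio = eigvals[order] / eigvals.sum()
    return scores, loadings, ratio
```

Confirming such a solution in an independent sample, as the paper does with CFA, requires a structural-equation modeling tool rather than this unrotated extraction step.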
NeuroImage | 1997
Sherri Gold; Stephan Arndt; Debra L. Johnson; Daniel S. O'Leary; Nancy C. Andreasen
The PET literature is growing exponentially, creating a need and an opportunity to perform a meta-analytic review consolidating the published information. This study describes the use of effect size as an index in PET studies and discusses how this measure can be used for comparing findings across studies, laboratories, and paradigms. In comparing studies across laboratories it is essential to know how the methods employed affect the results and conclusions drawn. This study also compared effect size for two different methods of tracer delivery in 15O PET studies ([15O]H2O bolus injection versus inhalation of [15O]CO2), whether averaged versus single-scan conditions were used, and the data analytic strategy employed. The effect sizes observed across studies were consistently large, with a median effect size of 8.55, indicating that the phenomena investigated in 15O PET studies are strong. The largest peak activation reported in a study was found to be affected by variability in sample size, data analytic strategy, and repeat versus single-scan conditions. However, the impact of these factors on smaller or less intense peaks was not examined. Minimal standards for reporting statistical results are discussed.
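One common standardized effect-size index for a two-condition comparison is Cohen's d with a pooled standard deviation; this is a generic sketch for illustration, and may not be the exact index the meta-analysis computed:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: standardized mean difference with pooled SD.

    Expressing a peak difference in SD units lets activations be compared
    across studies that used different scanners, units, and sample sizes.
    """
    pooled_sd = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd
```

For example, a two-SD mean difference with equal group SDs yields d = 1.0; by that yardstick a median effect size of 8.55 is very large.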
Psychiatry Research: Neuroimaging | 1997
Stephan Arndt; Sherri Gold; Ted Cizadlo; Jie Zheng; James C. Ehrhardt; Michael Flaum
Determining meaningful activation thresholds in functional magnetic resonance imaging (fMRI) paradigms is complicated by several factors. These include the time-series nature of the data, the influence of physiological rhythms (e.g. respiration) and vacillations introduced by the experimental design (e.g. cueing). We present an empirical threshold for each subject and each fMRI experiment that takes these factors into account. The method requires an additional fMRI data set as similar to the experimental paradigm as possible without dichotomously varying the experimental task of interest. A letter fluency task was used to illustrate this method. This technique differs from classical methods since the Pearson correlation probability values tabulated from statistical theory are not used. Rather, each subject defines his or her own set of threshold probability values for correlations. It is against these empirical thresholds, not Pearson's, that an experimental fMRI correlation is assessed.
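The per-subject empirical threshold can be sketched as follows: correlate every voxel of the additional "null" data set (same acquisition, no dichotomous task) with the task reference waveform, then take an upper percentile of those correlations as that subject's threshold. The function and parameter names are illustrative assumptions:

```python
import numpy as np

def empirical_threshold(null_data, reference, percentile=99.9):
    """Per-subject empirical correlation threshold from a task-free run.

    null_data: (n_timepoints, n_voxels) fMRI series acquired without the
    dichotomous task manipulation; reference: (n_timepoints,) task waveform.
    Returns the chosen upper percentile of |r| over all null voxels.
    """
    x = (reference - reference.mean()) / reference.std()
    y = null_data - null_data.mean(axis=0)
    r = (x @ y) / (len(x) * null_data.std(axis=0))   # Pearson r per voxel
    return np.percentile(np.abs(r), percentile)
```

Correlations in the experimental run are then declared active only if they exceed this subject-specific value, rather than a tabulated Pearson critical value.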
American Journal of Psychiatry | 1999
Sherri Gold; Stephan Arndt; Peg Nopoulos; Daniel S. O'Leary; Nancy C. Andreasen
American Journal of Psychiatry | 1999
Debra L. Johnson; John S. Wiebe; Sherri Gold; Nancy C. Andreasen; Richard D. Hichwa; G. Leonard Watkins; Laura L. Boles Ponto
Schizophrenia Research | 1997
Sherri Gold; Stephan Arndt; D.M. Mosnik; Peg Nopoulos; Daniel S. O’Leary; Nancy C. Andreasen