Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrew P. Holmes is active.

Publication


Featured research published by Andrew P. Holmes.


Human Brain Mapping | 2002

Nonparametric permutation tests for functional neuroimaging : a primer with examples

Thomas E. Nichols; Andrew P. Holmes

Requiring only minimal assumptions for validity, nonparametric permutation testing provides a flexible and intuitive methodology for the statistical analysis of data from functional neuroimaging experiments, at some computational expense. Introduced into the functional neuroimaging literature by Holmes et al. (1996: J Cereb Blood Flow Metab 16:7–22), the permutation approach readily accounts for the multiple comparisons problem implicit in the standard voxel‐by‐voxel hypothesis testing framework. When the appropriate assumptions hold, the nonparametric permutation approach gives results similar to those obtained from a comparable Statistical Parametric Mapping approach using a general linear model with multiple comparisons corrections derived from random field theory. For analyses with low degrees of freedom, such as single subject PET/SPECT experiments or multi‐subject PET/SPECT or fMRI designs assessed for population effects, the nonparametric approach employing a locally pooled (smoothed) variance estimate can outperform the comparable Statistical Parametric Mapping approach. Thus, these nonparametric techniques can be used to verify the validity of less computationally expensive parametric approaches. Although the theory and relative advantages of permutation approaches have been discussed by various authors, there has been no accessible explication of the method, and no freely distributed software implementing it. Consequently, there have been few practical applications of the technique. This article, and the accompanying MATLAB software, attempts to address these issues. The standard nonparametric randomization and permutation testing ideas are developed at an accessible level, using practical examples from functional neuroimaging, and the extensions for multiple comparisons described. Three worked examples from PET and fMRI are presented, with discussion, and comparisons with standard parametric approaches made where appropriate. Practical considerations are given throughout, and relevant statistical concepts are expounded in appendices. Hum. Brain Mapping 15:1–25, 2001.
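The maximal-statistic idea at the heart of this multiple-comparisons correction is compact enough to sketch. Below is a minimal Python illustration (not the authors' accompanying MATLAB software), assuming a one-sample design over subject-level contrast images in which each subject's sign is exchangeable under the null hypothesis; the data, subject count, and permutation count are invented for illustration.

```python
# Minimal sketch of a sign-flipping permutation test with max-statistic
# correction across voxels (one-sample design; illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(data, n_perm=1000, rng=rng):
    """data: (n_subjects, n_voxels) array of subject-level contrast values."""
    n_sub, n_vox = data.shape

    def t_stat(x):
        # One-sample t statistic per voxel.
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    t_obs = t_stat(data)
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        # Under the null, each subject's sign is exchangeable: flip at random
        # and record only the maximum statistic over the whole image.
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        max_null[p] = t_stat(data * signs).max()

    # Corrected p-value per voxel: proportion of (permutation) maxima that
    # meet or exceed the observed statistic, counting the observed labelling.
    p_corr = ((max_null[:, None] >= t_obs[None, :]).sum(0) + 1) / (n_perm + 1)
    return t_obs, p_corr

# Toy example: 12 subjects, 500 voxels, signal in the first 20 voxels.
data = rng.normal(size=(12, 500))
data[:, :20] += 1.0
t_obs, p_corr = permutation_test(data)
print("voxels significant at corrected p < 0.05:", int((p_corr < 0.05).sum()))
```

Because each permutation keeps only the maximum statistic over voxels, the resulting p-values control the family-wise error rate across the image, which is the sense in which the permutation approach handles the multiple comparisons problem.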


NeuroImage | 1995

Analysis of fMRI Time-Series Revisited

K. J. Friston; Andrew P. Holmes; Jean-Baptiste Poline; P. J. Grasby; Steven Williams; Richard S. J. Frackowiak; Robert Turner

This paper presents a general approach to the analysis of functional MRI time-series from one or more subjects. The approach is predicated on an extension of the general linear model that allows for correlations between error terms due to physiological noise or correlations that ensue after temporal smoothing. This extension uses the effective degrees of freedom associated with the error term. The effective degrees of freedom are a simple function of the number of scans and the temporal autocorrelation function. A specific form for the latter can be assumed if the data are smoothed, in time, to accentuate hemodynamic responses with a neural basis. This assumption leads to an expedient implementation of a flexible statistical framework. The importance of this small extension is that, in contradistinction to our previous approach, any parametric statistical analysis can be implemented. We demonstrate this point using a multiple regression analysis that tests for effects of interest (activations due to word generation), while taking explicit account of some obvious confounds.
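As a rough illustration of the effective degrees of freedom idea, the sketch below (Python, with an invented design and kernel width; not the SPM implementation) smooths a design in time with a Gaussian kernel and computes a trace-based effective df for the residual sum of squares, which falls well below the nominal df once the induced autocorrelation is accounted for.

```python
# Minimal sketch: effective degrees of freedom for a temporally smoothed GLM,
# assuming i.i.d. noise before smoothing. All parameter values are illustrative.
import numpy as np

def gaussian_smoothing_matrix(n, fwhm):
    """Toeplitz matrix K that convolves a length-n series with a Gaussian."""
    sigma = fwhm / np.sqrt(8 * np.log(2))
    t = np.arange(n)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    return K / K.sum(1, keepdims=True)

def effective_df(X, K):
    """Trace-based effective df of the residual sum of squares when
    i.i.d. noise is smoothed by K before fitting the (smoothed) design X."""
    n = X.shape[0]
    KX = K @ X
    R = np.eye(n) - KX @ np.linalg.pinv(KX)   # residual-forming matrix
    V = K @ K.T                               # autocovariance induced by smoothing
    RV = R @ V
    return np.trace(RV) ** 2 / np.trace(RV @ RV)

# Toy design: 100 scans, boxcar regressor plus constant; 8-scan FWHM smoothing.
n = 100
X = np.column_stack([np.tile([1.0] * 10 + [0.0] * 10, 5), np.ones(n)])
K = gaussian_smoothing_matrix(n, fwhm=8.0)
print("nominal df:", n - np.linalg.matrix_rank(X))
print("effective df:", round(effective_df(X, K), 1))
```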


NeuroImage | 1998

Event-related fMRI: Characterizing differential responses

K. J. Friston; P. C. Fletcher; Oliver Josephs; Andrew P. Holmes; Michael D. Rugg; Robert Turner

We present an approach to characterizing the differences among event-related hemodynamic responses in functional magnetic resonance imaging that are evoked by different sorts of stimuli. This approach is predicated on a linear convolution model and standard inferential statistics as employed by statistical parametric mapping. In particular we model evoked responses, and their differences, in terms of basis functions of the peri-stimulus time. This facilitates a characterization of the temporal response profiles that has a high effective temporal resolution relative to the repetition time. To demonstrate the technique we examined differential responses to visually presented words that had been seen prior to scanning or that were novel. The form of these differences involved both the magnitude and the latency of the response components. In this paper we focus on bilateral ventrolateral prefrontal responses that show deactivations for previously seen words and activations for novel words.
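A minimal sketch of the basis-function idea follows, assuming a difference-of-gammas canonical HRF and its temporal derivative as the basis set; the particular parameter values and event timing are illustrative assumptions, not taken from the study.

```python
# Minimal sketch: build design-matrix columns by convolving a stimulus onset
# train with basis functions of peri-stimulus time (canonical HRF + derivative).
import numpy as np
from scipy.stats import gamma

TR = 2.0          # repetition time (s)
n_scans = 200
dt = 0.1          # fine time grid for convolution

def canonical_hrf(t):
    # Double-gamma HRF: peak minus a weighted undershoot (illustrative values).
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

t_hires = np.arange(0, 32, dt)
hrf = canonical_hrf(t_hires)
hrf_deriv = np.gradient(hrf, dt)   # temporal derivative captures latency shifts

# Stimulus onsets (s) for one event type, placed on the fine grid.
onsets = np.arange(10, n_scans * TR - 20, 25.0)
stick = np.zeros(int(n_scans * TR / dt))
stick[(onsets / dt).astype(int)] = 1.0

def make_regressor(basis):
    conv = np.convolve(stick, basis)[: len(stick)]
    return conv[:: int(TR / dt)][:n_scans]   # resample at scan times

X = np.column_stack([make_regressor(hrf),
                     make_regressor(hrf_deriv),
                     np.ones(n_scans)])
print("design matrix shape:", X.shape)   # (200, 3)
```

Differential responses between two event types would then be tested with contrasts over the corresponding basis-function columns for each type, which is how both magnitude and latency differences can be characterized.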


NeuroImage | 1996

Detecting Activations in PET and fMRI: Levels of Inference and Power

K. J. Friston; Andrew P. Holmes; Jean-Baptiste Poline; Cathy J. Price; C. D. Frith

This paper is about detecting activations in statistical parametric maps and considers the relative sensitivity of a nested hierarchy of tests that we have framed in terms of the level of inference (voxel level, cluster level, and set level). These tests are based on the probability of obtaining c, or more, clusters with k, or more, voxels, above a threshold u. This probability has a reasonably simple form and is derived using distributional approximations from the theory of Gaussian fields. The most important contribution of this work is the notion of set-level inference. Set-level inference refers to the statistical inference that the number of clusters comprising an observed activation profile is highly unlikely to have occurred by chance. This inference pertains to the set of activations reaching criteria and represents a new way of assigning P values to distributed effects. Cluster-level inferences are a special case of set-level inferences, which obtain when the number of clusters c = 1. Similarly, voxel-level inferences are special cases of cluster-level inferences that result when the cluster can be very small (i.e., k = 0). Using a theoretical power analysis of distributed activations, we observed that set-level inferences are generally more powerful than cluster-level inferences and that cluster-level inferences are generally more powerful than voxel-level inferences. The price paid for this increased sensitivity is reduced localizing power: Voxel-level tests permit individual voxels to be identified as significant, whereas cluster- and set-level inferences only allow clusters or sets of clusters to be so identified. For all levels of inference the spatial size of the underlying signal f (relative to resolution) determines the most powerful thresholds to adopt. For set-level inferences if f is large (e.g., fMRI) then the optimum extent threshold should be greater than the expected number of voxels for each cluster. If f is small (e.g., PET) the extent threshold should be small. We envisage that set-level inferences will find a role in making statistical inferences about distributed activations, particularly in fMRI.
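The probability statement above can be sketched numerically. The snippet below assumes, as a simplification, that the number of suprathreshold clusters reaching the extent threshold k behaves as a Poisson variable whose rate is the expected number of clusters times the probability that a single cluster has k or more voxels; the numerical inputs are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a set-level p-value under a Poisson approximation
# for the number of qualifying clusters.
import math

def set_level_p(c, expected_clusters, p_size_ge_k):
    lam = expected_clusters * p_size_ge_k
    # P(C >= c) = 1 - sum_{i < c} Poisson(i; lam)
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(c))

# Example: under the null we expect 3 suprathreshold clusters, each with a
# 10% chance of reaching the extent threshold k; we observed 2 such clusters.
print(round(set_level_p(c=2, expected_clusters=3.0, p_size_ge_k=0.1), 4))
```

Setting c = 1 recovers a cluster-level p-value, and additionally letting k go to zero recovers a voxel-level (peak-height) test, which is the nesting described above.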


NeuroImage | 2000

To Smooth or Not to Smooth?: Bias and Efficiency in fMRI Time-Series Analysis

K. J. Friston; Oliver Josephs; Eric Zarahn; Andrew P. Holmes; S. Rouquette; Jean-Baptiste Poline

This paper concerns temporal filtering in fMRI time-series analysis. Whitening serially correlated data is the most efficient approach to parameter estimation. However, if there is a discrepancy between the assumed and the actual correlations, whitening can render the analysis exquisitely sensitive to bias when estimating the standard error of the ensuing parameter estimates. This bias, although not expressed in terms of the estimated responses, has profound effects on any statistic used for inference. The special constraints of fMRI analysis ensure that there will always be a misspecification of the assumed serial correlations. One resolution of this problem is to filter the data to minimize bias, while maintaining a reasonable degree of efficiency. In this paper we present expressions for efficiency (of parameter estimation) and bias (in estimating standard error) in terms of assumed and actual correlation structures in the context of the general linear model. We show that: (i) Whitening strategies can result in profound bias and are therefore probably precluded in parametric fMRI data analyses. (ii) Band-pass filtering, and implicitly smoothing, has an important role in protecting against inferential bias.
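A toy numerical sketch of the quantities discussed above: for a given temporal filter S, design X, and contrast c, it computes the efficiency of the contrast estimate relative to the optimal (true-covariance) GLS estimator and the bias in the estimated standard error when the assumed serial correlations differ from the actual ones. The AR(1) parameters, filter widths, and design are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: efficiency and standard-error bias under assumed vs actual
# serial correlations, for a whitening filter and a Gaussian smoothing filter.
import numpy as np

def ar1_cov(n, rho):
    i = np.arange(n)
    return rho ** np.abs(i[:, None] - i[None, :])

def gaussian_smoother(n, fwhm):
    sigma = fwhm / np.sqrt(8 * np.log(2))
    i = np.arange(n)
    K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return K / K.sum(1, keepdims=True)

def contrast_var(S, X, V, c):
    """Variance of c'beta_hat when data with covariance V are filtered by S."""
    pinv_SX = np.linalg.pinv(S @ X)
    return float(c @ pinv_SX @ S @ V @ S.T @ pinv_SX.T @ c)

n = 128
boxcar = np.tile([1.0] * 16 + [0.0] * 16, n // 32)        # blocked design
X = np.column_stack([boxcar - boxcar.mean(), np.ones(n)])
c = np.array([1.0, 0.0])

V_true = ar1_cov(n, rho=0.4)       # actual serial correlations
V_assumed = ar1_cov(n, rho=0.2)    # misspecified model of them

# Best achievable variance: GLS using the true covariance.
optimal_var = float(c @ np.linalg.inv(X.T @ np.linalg.inv(V_true) @ X) @ c)

filters = {
    "whitening (assumed V)": np.linalg.cholesky(np.linalg.inv(V_assumed)).T,
    "Gaussian smoothing":    gaussian_smoother(n, fwhm=6.0),
}
for name, S in filters.items():
    true_var = contrast_var(S, X, V_true, c)
    assumed_var = contrast_var(S, X, V_assumed, c)
    eff = optimal_var / true_var             # 1.0 means fully efficient
    bias = np.sqrt(assumed_var / true_var)   # 1.0 means an unbiased SE
    print(f"{name}: efficiency = {eff:.2f}, std-error bias factor = {bias:.2f}")
```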


NeuroImage | 1998

Characterizing Stimulus-Response Functions Using Nonlinear Regressors in Parametric fMRI Experiments

C. Büchel; Andrew P. Holmes; Geraint Rees; K. J. Friston

Parametric study designs proved very useful in characterizing the relationship between experimental parameters (e.g., word presentation rate) and regional cerebral blood flow in positron emission tomography studies. In a previous paper we presented a method that fits nonlinear functions of stimulus or task parameters to hemodynamic responses, using second-order polynomial expansions. Here we expand this approach to model nonlinear relationships between BOLD responses and experimental parameters, using fMRI. We present a framework that allows this technique to be implemented in the context of the general linear model employed by statistical parametric mapping (SPM). Statistical inferences, in this instance, are based on F statistics and in this respect we emphasize the use of corrected P values for F fields (i.e., SPM{F}). The approach is illustrated with an fMRI study that looked at the effect of increasing auditory word-presentation rate. Our parametric design allowed us to characterize different forms of rate-dependent responses in three critical regions: (i) bilateral frontal regions showed a categorical response to the presence of words irrespective of rate, suggesting a role for this region in establishing cognitive (e.g., attentional) set; (ii) in bilateral occipitotemporal regions activations increased linearly with increasing word rate; and (iii) posterior auditory association cortex exhibited a nonlinear (inverted U) relationship to word rate.
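A minimal sketch of the polynomial-expansion idea, with simulated data and a hypothetical set of presentation rates (none of the numbers come from the study): the experimental parameter is expanded into centred linear and quadratic regressors, and an F test over the expansion terms assesses any rate-dependent response.

```python
# Minimal sketch: second-order polynomial expansion of an experimental
# parameter into GLM regressors, with an F test over the expansion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rate = np.repeat([10, 20, 40, 60, 80], 20).astype(float)   # words per minute
n = rate.size

# Centre the parameter, then form linear and quadratic regressors.
r = rate - rate.mean()
X = np.column_stack([r, r**2 - (r**2).mean(), np.ones(n)])

# Simulated response with an inverted-U (nonlinear) rate dependence.
y = -0.002 * (rate - 45) ** 2 + rng.normal(scale=1.0, size=n)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
rss_full = resid @ resid

# Reduced model: constant only (tests both rate regressors at once).
rss_red = ((y - y.mean()) ** 2).sum()
df1, df2 = 2, n - X.shape[1]
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
p = stats.f.sf(F, df1, df2)
print(f"F({df1},{df2}) = {F:.1f}, p = {p:.2g}")
```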


Nature | 1997

How the brain learns to see objects and faces in an impoverished context

R. J. Dolan; G. R. Fink; Edmund T. Rolls; M. Booth; Andrew P. Holmes; R. S. J. Frackowiak; K. J. Friston

A degraded image of an object or face, which appears meaningless when seen for the first time, is easily recognizable after viewing an undegraded version of the same image. The neural mechanisms by which this form of rapid perceptual learning facilitates perception are not well understood. Psychological theory suggests the involvement of systems for processing stimulus attributes, spatial attention and feature binding, as well as those involved in visual imagery. Here we investigate where and how this rapid perceptual learning is expressed in the human brain by using functional neuroimaging to measure brain activity during exposure to degraded images before and after exposure to the corresponding undegraded versions (Fig. 1). Perceptual learning of faces or objects enhanced the activity of inferior temporal regions known to be involved in face and object recognition respectively. In addition, both face and object learning led to increased activity in medial and lateral parietal regions that have been implicated in attention and visual imagery. We observed a strong coupling between the temporal face area and the medial parietal cortex when, and only when, faces were perceived. This suggests that perceptual learning involves direct interactions between areas involved in face recognition and those involved in spatial attention, feature binding and memory recall.


NeuroImage | 2002

Variability in fMRI: An examination of intersession differences

David McGonigle; A. M. Howseman; B. S. Athwal; K. J. Friston; Richard S. J. Frackowiak; Andrew P. Holmes

The results from a single functional magnetic resonance imaging session are typically reported as indicative of the subject's functional neuroanatomy. Underlying this interpretation is the implicit assumption that there are no responses specific to that particular session, i.e., that the potential variability of response between sessions is negligible. The present study sought to examine this assumption empirically. A total of 99 sessions, comprising 33 repeats of simple motor, visual, and cognitive paradigms, were collected over a period of 2 months on a single male subject. For each paradigm, the inclusion of session-by-condition interactions explained a significant amount of error variance (P < 0.05 corrected for multiple comparisons) over a model assuming a common activation magnitude across all sessions. However, many of those voxels displaying significant session-by-condition interactions were not seen in a multisession fixed-effects analysis of the same data set; i.e., they were not activated on average across all sessions. Most voxels that were both significantly variable and activated on average across all sessions did not survive a random-effects analysis (modeling between-session variance). We interpret our results as demonstrating that correct inference about subject responses to activation tasks can be derived through the use of a statistical model which accounts for both within- and between-session variance, combined with an appropriately large session sample size. If researchers have access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses specific to this single session may be claimed to be typical responses for this subject.
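The distinction between fixed- and random-effects inference drawn here can be sketched in a few lines. The following toy simulation (illustrative numbers only, not the study's data) contrasts pooling all scans across sessions with summarizing each session first, so that between-session variability enters the error term.

```python
# Minimal sketch: fixed-effects vs random-effects inference for one voxel,
# with simulated between-session variability in the true effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sessions, n_scans = 33, 40

# Each session has its own true effect (between-session variance),
# plus within-session scan noise.
session_effects = rng.normal(loc=0.5, scale=0.6, size=n_sessions)
data = session_effects[:, None] + rng.normal(scale=1.0, size=(n_sessions, n_scans))

# Fixed effects: treat every scan as independent, ignoring session structure.
t_fixed, p_fixed = stats.ttest_1samp(data.ravel(), 0.0)

# Random effects: one summary value (mean) per session, then a t test across
# sessions, so between-session variance contributes to the error term.
t_random, p_random = stats.ttest_1samp(data.mean(axis=1), 0.0)

print(f"fixed effects:  t = {t_fixed:.1f}, p = {p_fixed:.1e}")
print(f"random effects: t = {t_random:.1f}, p = {p_random:.1e}")
```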


NeuroImage | 1999

Robust smoothness estimation in statistical parametric maps using standardized residuals from the general linear model

Stefan J. Kiebel; Jean-Baptiste Poline; K. J. Friston; Andrew P. Holmes; Keith J. Worsley

The assessment of significant activations in functional imaging using voxel-based methods often relies on results derived from the theory of Gaussian random fields. These results solve the multiple comparison problem and assume that the spatial correlation or smoothness of the data is known or can be estimated. End results (i.e., P values associated with local maxima, clusters, or sets of clusters) critically depend on this assessment, which should be as exact and as reliable as possible. In some earlier implementations of statistical parametric mapping (SPM) (SPM94, SPM95) the smoothness was assessed on Gaussianized t-fields (Gt-f) that are not generally free of physiological signal. This technique has two limitations. First, the estimation is not stable (the variance of the estimator being far from negligible) and, second, physiological signal in the Gt-f will bias the estimation. In this paper, we describe an estimation method that overcomes these drawbacks. The new approach involves estimating the smoothness of standardized residual fields which approximates the smoothness of the component fields of the associated t-field. Knowing the smoothness of these component fields is important because it allows one to compute corrected P values for statistical fields other than the t-field or the Gt-f (e.g., the F-map) and eschews bias due to deviation from the null hypothesis. We validate the method on simulated data and demonstrate it using data from a functional MRI study.
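A toy illustration of the underlying idea, for a one-dimensional Gaussian autocorrelation model (not the SPM estimator itself): the smoothness of a unit-variance field can be recovered from the variance of its spatial derivative, here computed on standardized residual images simulated as smoothed white noise.

```python
# Minimal sketch: estimate FWHM (in voxels) from spatial derivatives of
# standardized residual images; 1-D toy example with illustrative values.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(3)
n_resid, n_vox = 50, 2000
true_fwhm = 6.0
sigma_k = true_fwhm / np.sqrt(8 * np.log(2))

# Simulate residual images as smoothed white noise, then standardize each
# voxel across images so every voxel has unit variance.
resid = gaussian_filter1d(rng.normal(size=(n_resid, n_vox)), sigma_k, axis=1)
resid /= resid.std(axis=0, keepdims=True)

# For a unit-variance Gaussian field, the variance of the spatial derivative
# determines the smoothness: fwhm = sqrt(4 ln 2 / var(dZ/dx)).
deriv = np.diff(resid, axis=1)
lam = deriv.var()
fwhm_hat = np.sqrt(4 * np.log(2) / lam)
print(f"true FWHM = {true_fwhm}, estimated FWHM = {fwhm_hat:.2f}")
```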


Human Brain Mapping | 1996

A multivariate analysis of PET activation studies

K. J. Friston; Jean-Baptiste Poline; Andrew P. Holmes; Chris Frith; Richard S. J. Frackowiak

In this paper we present a general multivariate approach to the analysis of functional imaging studies. This analysis uses standard multivariate techniques to make statistical inferences about activation effects and to describe the important features of these effects. More specifically, the proposed analysis uses multivariate analysis of covariance (ManCova) with Wilks lambda to test for specific effects of interest (e.g., differences among activation conditions), and canonical variates analysis (CVA) to characterize differential responses in terms of distributed brain systems. The data are subject to ManCova after transformation using their principal components or eigenimages. After significance of the activation effect has been assessed, underlying changes are described in terms of canonical images. Canonical images are like eigenimages but take explicit account of the effects of error or noise. The generality of this approach is assured by the general linear model used in the ManCova. The design and inferences sought are embodied in the design matrix and can, in principle, accommodate most parametric statistical analyses. This multivariate analysis may provide a statistical approach to PET activation studies that 1) complements univariate approaches like statistical parametric mapping, and 2) may facilitate the extension of existing multivariate techniques, like the scaled subprofile model and eigenimage analysis, to include hypothesis testing and statistical inference.
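The pipeline described above can be sketched compactly: project the data onto their principal eigenimages, test a condition effect with Wilks' lambda (using Bartlett's chi-square approximation), and characterize it with the leading canonical variate mapped back into voxel space. Everything below (dimensions, effect size, two-condition design) is simulated purely for illustration and is not the SPM implementation.

```python
# Minimal sketch: eigenimage reduction + MANCOVA-style Wilks' lambda test
# + canonical image for a two-condition design (illustrative values only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_scans, n_vox, n_comp = 48, 5000, 5
condition = np.tile([0, 1], n_scans // 2)        # two activation conditions

# Simulated scans with a small condition-dependent spatial pattern.
pattern = rng.normal(size=n_vox)
Y = rng.normal(size=(n_scans, n_vox)) + 0.3 * condition[:, None] * pattern

# Eigenimage (PCA) reduction of the mean-centred data.
Yc = Y - Y.mean(0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U[:, :n_comp] * s[:n_comp]              # component scores per scan

# Within- and between-condition SSP matrices on the component scores.
means = np.array([scores[condition == g].mean(0) for g in (0, 1)])
W = sum((scores[condition == g] - means[g]).T @ (scores[condition == g] - means[g])
        for g in (0, 1))
B = sum((condition == g).sum() * np.outer(means[g] - scores.mean(0),
                                          means[g] - scores.mean(0))
        for g in (0, 1))

wilks = np.linalg.det(W) / np.linalg.det(W + B)
p, g = n_comp, 2
chi2 = -(n_scans - 1 - (p + g) / 2) * np.log(wilks)      # Bartlett approximation
print("Wilks' lambda:", round(wilks, 3),
      " p =", round(stats.chi2.sf(chi2, p * (g - 1)), 4))

# Leading canonical variate: eigenvector of W^-1 B with the largest eigenvalue,
# mapped back through the eigenimages to give a canonical image over voxels.
evals, evecs = np.linalg.eig(np.linalg.solve(W, B))
canonical_image = Vt[:n_comp].T @ evecs[:, np.argmax(evals.real)].real
print("canonical image shape:", canonical_image.shape)   # (n_vox,)
```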

Collaboration


Dive into Andrew P. Holmes's collaborations.

Top Co-Authors


K. J. Friston

University College London


Richard S. J. Frackowiak

Wellcome Trust Centre for Neuroimaging


Chris Frith

Wellcome Trust Centre for Neuroimaging


Oliver Josephs

Wellcome Trust Centre for Neuroimaging


David Silbersweig

Brigham and Women's Hospital
