Publication


Featured research published by William D. Penny.


NeuroImage | 2003

Dynamic causal modelling.

K. J. Friston; Lee M. Harrison; William D. Penny

In this paper we present an approach to the identification of nonlinear input-state-output systems. By using a bilinear approximation to the dynamics of interactions among states, the parameters of the implicit causal model reduce to three sets. These comprise (1) parameters that mediate the influence of extrinsic inputs on the states, (2) parameters that mediate intrinsic coupling among the states, and (3) [bilinear] parameters that allow the inputs to modulate that coupling. Identification proceeds in a Bayesian framework given known, deterministic inputs and the observed responses of the system. We developed this approach for the analysis of effective connectivity using experimentally designed inputs and fMRI responses. In this context, the coupling parameters correspond to effective connectivity and the bilinear parameters reflect the changes in connectivity induced by inputs. The ensuing framework allows one to characterise fMRI experiments, conceptually, as an experimental manipulation of integration among brain regions (by contextual or trial-free inputs, like time or attentional set) that is revealed using evoked responses (to perturbations or trial-bound inputs, like stimuli). As with previous analyses of effective connectivity, the focus is on experimentally induced changes in coupling (cf., psychophysiologic interactions). However, unlike previous approaches in neuroimaging, the causal model ascribes responses to designed deterministic inputs, as opposed to treating inputs as unknown and stochastic.
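
The bilinear state equation the abstract refers to can be written as dx/dt = (A + sum_j u_j B_j) x + C u, with A encoding intrinsic coupling, the B_j the input-dependent modulation of that coupling, and C the influence of extrinsic inputs on the states. The sketch below is a minimal illustration rather than SPM's DCM implementation: it integrates the neuronal dynamics with a simple Euler scheme, the function name simulate_states and the example matrices are chosen purely for illustration, and the haemodynamic model that maps hidden states to BOLD responses is omitted.

# Minimal sketch of the bilinear state equation underlying DCM for fMRI:
#   dx/dt = (A + sum_j u_j * B[j]) @ x + C @ u
# A: intrinsic coupling, B[j]: modulation of coupling by input j, C: driving inputs.
# The haemodynamic observation model that maps x to BOLD is omitted here.
import numpy as np

def simulate_states(A, B, C, u, dt=0.1):
    """Integrate the bilinear neuronal dynamics with a simple Euler scheme.

    A : (n, n) intrinsic connectivity
    B : (m, n, n) bilinear (input-dependent) modulation of connectivity
    C : (n, m) influence of extrinsic inputs on the states
    u : (T, m) known, deterministic inputs
    """
    n = A.shape[0]
    x = np.zeros(n)
    states = []
    for u_t in u:
        J = A + np.tensordot(u_t, B, axes=1)   # effective coupling at time t
        x = x + dt * (J @ x + C @ u_t)
        states.append(x.copy())
    return np.array(states)

# Example: two regions, one driving input that also modulates the 1->2 connection.
A = np.array([[-1.0, 0.0], [0.4, -1.0]])
B = np.array([[[0.0, 0.0], [0.3, 0.0]]])
C = np.array([[1.0], [0.0]])
u = np.zeros((200, 1)); u[50:100] = 1.0
x = simulate_states(A, B, C, u)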


NeuroImage | 2009

Bayesian model selection for group studies

Klaas E. Stephan; William D. Penny; Jean Daunizeau; Rosalyn J. Moran; K. J. Friston

Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data. BMS has recently found widespread application in neuroimaging, particularly in the context of dynamic causal modelling (DCM). However, so far, combining BMS results from several subjects has relied on simple (fixed effects) metrics, e.g. the group Bayes factor (GBF), that do not account for group heterogeneity or outliers. In this paper, we compare the GBF with two random effects methods for BMS at the between-subject or group level. These methods provide inference on model-space using a classical and Bayesian perspective respectively. First, a classical (frequentist) approach uses the log model evidence as a subject-specific summary statistic. This enables one to use analysis of variance to test for differences in log-evidences over models, relative to inter-subject differences. We then consider the same problem in Bayesian terms and describe a novel hierarchical model, which is optimised to furnish a probability density on the models themselves. This new variational Bayes method rests on treating the model as a random variable and estimating the parameters of a Dirichlet distribution which describes the probabilities for all models considered. These probabilities then define a multinomial distribution over model space, allowing one to compute how likely it is that a specific model generated the data of a randomly chosen subject as well as the exceedance probability of one model being more likely than any other model. Using empirical and synthetic data, we show that optimising a conditional density of the model probabilities, given the log-evidences for each model over subjects, is more informative and appropriate than both the GBF and frequentist tests of the log-evidences. In particular, we found that the hierarchical Bayesian approach is considerably more robust than either of the other approaches in the presence of outliers. We expect that this new random effects method will prove useful for a wide range of group studies, not only in the context of DCM, but also for other modelling endeavours, e.g. comparing different source reconstruction methods for EEG/MEG or selecting among competing computational models of learning and decision-making.
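
The random effects scheme described above can be sketched as follows: iterate a variational update of the Dirichlet parameters from the subjects' log-evidences, then estimate exceedance probabilities by sampling from the resulting Dirichlet density. This is an illustrative re-implementation under simplifying assumptions (a uniform Dirichlet prior and a fixed number of iterations), not the spm_BMS routine itself; the function name group_bms is hypothetical.

# Sketch of random-effects Bayesian model selection at the group level:
# a variational update of Dirichlet parameters over model probabilities,
# followed by Monte Carlo estimation of exceedance probabilities.
import numpy as np
from scipy.special import digamma

def group_bms(log_evidence, n_iter=100, n_samples=100000, seed=0):
    """log_evidence: (N subjects, K models) array of log model evidences."""
    N, K = log_evidence.shape
    alpha0 = np.ones(K)
    alpha = alpha0.copy()
    for _ in range(n_iter):
        # posterior probability that each subject's data were generated by model k
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        g = np.exp(log_u - log_u.max(axis=1, keepdims=True))
        g /= g.sum(axis=1, keepdims=True)
        alpha = alpha0 + g.sum(axis=0)          # update Dirichlet counts
    expected_p = alpha / alpha.sum()            # expected model frequencies
    rng = np.random.default_rng(seed)
    samples = rng.dirichlet(alpha, size=n_samples)
    exceedance_p = np.bincount(samples.argmax(axis=1), minlength=K) / n_samples
    return alpha, expected_p, exceedance_p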


NeuroImage | 2004

Comparing dynamic causal models

William D. Penny; Klaas E. Stephan; Andrea Mechelli; K. J. Friston

This article describes the use of Bayes factors for comparing dynamic causal models (DCMs). DCMs are used to make inferences about effective connectivity from functional magnetic resonance imaging (fMRI) data. These inferences, however, are contingent upon assumptions about model structure, that is, the connectivity pattern between the regions included in the model. Given the current lack of detailed knowledge on anatomical connectivity in the human brain, there are often considerable degrees of freedom when defining the connectional structure of DCMs. In addition, many plausible scientific hypotheses may exist about which connections are changed by experimental manipulation, and a formal procedure for directly comparing these competing hypotheses is highly desirable. In this article, we show how Bayes factors can be used to guide choices about model structure, both concerning the intrinsic connectivity pattern and the contextual modulation of individual connections. The combined use of Bayes factors and DCM thus allows one to evaluate competing scientific theories about the architecture of large-scale neural networks and the neuronal interactions that mediate perception and cognition.
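
Because the Bayes factor for two models is simply the ratio of their evidences, a fixed-effects comparison reduces to differencing (approximate) log-evidences and, across subjects, summing those differences to obtain the group Bayes factor. The snippet below is a minimal sketch of that bookkeeping; the log-evidence values in the example are hypothetical.

# Minimal sketch of model comparison with Bayes factors. Given (approximate) log
# model evidences for two DCMs, the Bayes factor is the exponentiated difference;
# across subjects the log Bayes factors add (group Bayes factor).
import numpy as np

def bayes_factor(log_ev_i, log_ev_j):
    """Bayes factor in favour of model i over model j for one subject."""
    return np.exp(log_ev_i - log_ev_j)

def group_bayes_factor(log_ev_i, log_ev_j):
    """Fixed-effects group Bayes factor: product over subjects."""
    return np.exp(np.sum(np.asarray(log_ev_i) - np.asarray(log_ev_j)))

# Example with hypothetical log-evidences for 5 subjects:
lme_m1 = [-310.2, -295.7, -301.1, -320.5, -287.9]
lme_m2 = [-313.0, -297.1, -300.2, -324.8, -289.4]
print(group_bayes_factor(lme_m1, lme_m2))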


In: Friston KJ, Ashburner JT, Kiebel SJ, Nichols TE, Penny WD (eds.), Statistical Parametric Mapping: The Analysis of Functional Brain Images, pp. 10-31. Elsevier: Amsterdam | 2007

Statistical Parametric Mapping

K. J. Friston; John Ashburner; Stefan J. Kiebel; Thomas E. Nichols; William D. Penny

1. INTRODUCTION. This chapter is about making regionally specific inferences in neuroimaging. These inferences may be about differences expressed when comparing one group of subjects to another or, within subjects, over a sequence of observations. They may pertain to structural differences (e.g. in voxel-based morphometry; Ashburner and Friston, 2000) or neurophysiological indices of brain function (e.g. fMRI). The principles of data analysis are very similar for all of these applications and constitute the subject of this chapter. We will focus on the analysis of fMRI time-series because this covers most of the issues that are likely to be encountered in other modalities. Generally, the analysis of structural images and PET scans is simpler because they do not have to deal with correlated errors from one scan to the next. A general issue, in data analysis, is the relationship between the neurobiological hypothesis one posits and the statistical models adopted to test that hypothesis. This chapter begins by reviewing the distinction between functional specialization and integration and how these principles serve as the motivation for most analyses of neuroimaging data. We will address the design and analysis of neuroimaging studies from both these perspectives but note that both have to be integrated for a full understanding of brain mapping results. Statistical parametric mapping is generally used to identify functionally specialized brain regions and is the most prevalent approach to characterizing functional anatomy and disease-related changes. The alternative perspective, namely that provided by functional integration, requires a different set of [multivariate] approaches that examine the relationship between changes in activity in one brain area and another. Statistical parametric mapping is a voxel-based approach, employing classical inference, to make some comment about regionally specific responses to experimental factors. In order to assign an observed response to a particular brain structure, or cortical area, the data must conform to a known anatomical space. Before considering statistical modeling, this chapter deals briefly with how a time-series of images is realigned and mapped into some standard anatomical space (e.g. a stereotactic space). The general ideas behind statistical parametric mapping are then described and illustrated with attention to the different sorts of inferences that can be made with different experimental designs. fMRI is special, in the sense that the data lend themselves to a signal processing perspective. This can be exploited to ensure that both the design and analysis are …
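
The mass-univariate approach the chapter describes can be illustrated with a stripped-down general linear model: the same design matrix is fitted at every voxel and a t-statistic is formed for a contrast of the estimated parameters. The sketch below deliberately omits the serial-correlation modelling, spatial smoothing and random field theory corrections that the chapter goes on to discuss; the function name glm_t_map is chosen here for illustration.

# Minimal mass-univariate GLM sketch in the spirit of statistical parametric mapping:
# fit Y = X @ beta + error independently at every voxel and form a t-statistic for a
# contrast of the parameter estimates.
import numpy as np

def glm_t_map(Y, X, c):
    """Y: (scans, voxels) data, X: (scans, regressors) design matrix, c: contrast vector."""
    pinvX = np.linalg.pinv(X)
    beta = pinvX @ Y                                   # (regressors, voxels)
    resid = Y - X @ beta
    dof = Y.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof            # residual variance per voxel
    var_contrast = c @ pinvX @ pinvX.T @ c             # design-dependent scaling
    t = (c @ beta) / np.sqrt(sigma2 * var_contrast)
    return t, dof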


NeuroImage | 2002

Classical and Bayesian inference in neuroimaging: Theory

K. J. Friston; William D. Penny; Christophe Phillips; Stefan J. Kiebel; Geoffrey E. Hinton; John Ashburner

This paper reviews hierarchical observation models, used in functional neuroimaging, in a Bayesian light. It emphasizes the common ground shared by classical and Bayesian methods to show that conventional analyses of neuroimaging data can be usefully extended within an empirical Bayesian framework. In particular we formulate the procedures used in conventional data analysis in terms of hierarchical linear models and establish a connection between classical inference and parametric empirical Bayes (PEB) through covariance component estimation. This estimation is based on an expectation maximization or EM algorithm. The key point is that hierarchical models not only provide for appropriate inference at the highest level but that one can revisit lower levels suitably equipped to make Bayesian inferences. Bayesian inferences eschew many of the difficulties encountered with classical inference and characterize brain responses in a way that is more directly predicated on what one is interested in. The motivation for Bayesian approaches is reviewed and the theoretical background is presented in a way that relates to conventional methods, in particular restricted maximum likelihood (ReML). This paper is a technical and theoretical prelude to subsequent papers that deal with applications of the theory to a range of important issues in neuroimaging. These issues include: (i) estimating nonsphericity or variance components in fMRI time-series that can arise from serial correlations within subject, or are induced by multisubject (i.e., hierarchical) studies; (ii) spatiotemporal Bayesian models for imaging data, in which voxel-specific effects are constrained by responses in other voxels; (iii) Bayesian estimation of nonlinear models of hemodynamic responses; and (iv) principled ways of mixing structural and functional priors in EEG source reconstruction. Although diverse, all these estimation problems are accommodated by the PEB framework described in this paper.
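
The idea that one can revisit lower levels suitably equipped to make Bayesian inferences can be illustrated with the simplest two-level hierarchical model, where subject-level effects are drawn from a group distribution whose variance component is estimated from the data. The sketch below uses a few EM-style updates with a known first-level variance; it illustrates the empirical Bayes shrinkage principle only and is not the ReML covariance component machinery used in SPM. The function name peb_shrinkage and the numbers in the example are hypothetical.

# Schematic example of the parametric empirical Bayes (PEB) idea, using the simplest
# two-level hierarchical model:
#   y_i = theta_i + e_i,   e_i ~ N(0, s2_e)   (first level, s2_e assumed known)
#   theta_i = mu + z_i,    z_i ~ N(0, s2_t)   (second level)
# The between-level variance s2_t is estimated with EM-style updates, then the
# lower-level effects are revisited with Bayesian (shrinkage) estimates.
import numpy as np

def peb_shrinkage(y, s2_e, n_iter=50):
    y = np.asarray(y, dtype=float)
    mu, s2_t = y.mean(), y.var()
    for _ in range(n_iter):
        # E-step: posterior over the first-level effects given current (mu, s2_t)
        gain = s2_t / (s2_t + s2_e)
        post_mean = mu + gain * (y - mu)
        post_var = gain * s2_e
        # M-step: update the second-level mean and variance component
        mu = post_mean.mean()
        s2_t = np.mean((post_mean - mu) ** 2 + post_var)
    return post_mean, mu, s2_t

# Subject-level effects are shrunk towards the group mean in proportion to how
# reliable they are, which is the sense in which lower levels are "revisited".
effects, group_mean, between_var = peb_shrinkage([2.1, 0.4, 3.3, 1.0, 5.2], s2_e=1.0)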


NeuroImage | 2010

Ten simple rules for dynamic causal modeling.

Klaas E. Stephan; William D. Penny; Rosalyn J. Moran; H.E.M. den Ouden; Jean Daunizeau; K. J. Friston

Dynamic causal modeling (DCM) is a generic Bayesian framework for inferring hidden neuronal states from measurements of brain activity. It provides posterior estimates of neurobiologically interpretable quantities such as the effective strength of synaptic connections among neuronal populations and their context-dependent modulation. DCM is increasingly used in the analysis of a wide range of neuroimaging and electrophysiological data. Given the relative complexity of DCM, compared to conventional analysis techniques, a good knowledge of its theoretical foundations is needed to avoid pitfalls in its application and interpretation of results. By providing good practice recommendations for DCM, in the form of ten simple rules, we hope that this article serves as a helpful tutorial for the growing community of DCM users.


PLOS Computational Biology | 2010

Comparing families of dynamic causal models

William D. Penny; Klaas E. Stephan; Jean Daunizeau; Maria Joao Rosa; K. J. Friston; Thomas M. Schofield; Alexander P. Leff

Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data.
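
Family-level inference and within-family averaging can be sketched directly from posterior model probabilities: family probabilities are sums of model posteriors within each family (under a prior that is uniform over families), and Bayesian model averaging weights each model's parameter estimate by its posterior probability. The snippet below is a simplified single-subject, fixed-effects illustration; the function names family_inference and bma_within_family are hypothetical.

# Sketch of family-level inference and within-family Bayesian model averaging.
import numpy as np

def family_inference(log_evidence, families):
    """log_evidence: (K,) log evidences for K models (one subject, fixed effects);
    families: list of index arrays partitioning the K models into families."""
    # uniform prior over families, spread evenly across the models in each family
    prior = np.zeros(len(log_evidence))
    for f in families:
        prior[f] = 1.0 / (len(families) * len(f))
    log_joint = log_evidence + np.log(prior)
    post = np.exp(log_joint - log_joint.max())
    post /= post.sum()                                  # posterior model probabilities
    family_post = np.array([post[f].sum() for f in families])
    return post, family_post

def bma_within_family(post, family, model_means):
    """Average a parameter over models in one family, weighted by model posteriors."""
    w = post[family] / post[family].sum()
    return w @ np.asarray(model_means)[family]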


NeuroImage | 2007

Variational free energy and the Laplace approximation

K. J. Friston; Jérémie Mattout; Nelson J. Trujillo-Barreto; John Ashburner; William D. Penny

This note derives the variational free energy under the Laplace approximation, with a focus on accounting for additional model complexity induced by increasing the number of model parameters. This is relevant when using the free energy as an approximation to the log-evidence in Bayesian model averaging and selection. By setting restricted maximum likelihood (ReML) in the larger context of variational learning and expectation maximisation (EM), we show how the ReML objective function can be adjusted to provide an approximation to the log-evidence for a particular model. This means ReML can be used for model selection, specifically to select or compare models with different covariance components. This is useful in the context of hierarchical models because it enables a principled selection of priors that, under simple hyperpriors, can be used for automatic model selection and relevance determination (ARD). Deriving the ReML objective function, from basic variational principles, discloses the simple relationships among Variational Bayes, EM and ReML. Furthermore, we show that EM is formally identical to a full variational treatment when the precisions are linear in the hyperparameters. Finally, we also consider, briefly, dynamic models and how these inform the regularisation of free energy ascent schemes, like EM and ReML.
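
Under the Laplace approximation the free energy decomposes into an accuracy term (the expected log-likelihood) minus a complexity term, which for Gaussian prior and posterior densities is their Kullback-Leibler divergence; this is how adding parameters is penalised unless they buy a sufficient gain in accuracy. The sketch below computes that decomposition given the posterior and prior moments; the accuracy term is model-specific and is passed in rather than derived, and the function names are chosen for illustration.

# Sketch of the free-energy decomposition under the Laplace approximation:
# F = accuracy - complexity, where the complexity term is the KL divergence between
# the Gaussian posterior q(theta) = N(m, S) and the Gaussian prior N(m0, S0).
import numpy as np

def kl_gaussians(m, S, m0, S0):
    """KL[ N(m, S) || N(m0, S0) ] for multivariate Gaussians."""
    d = m - m0
    S0_inv = np.linalg.inv(S0)
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_S0 = np.linalg.slogdet(S0)
    return 0.5 * (np.trace(S0_inv @ S) + d @ S0_inv @ d - len(m) + logdet_S0 - logdet_S)

def laplace_free_energy(accuracy, m, S, m0, S0):
    """Approximate log-evidence: expected log-likelihood minus the complexity penalty."""
    return accuracy - kl_gaussians(m, S, m0, S0)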


Computational Intelligence and Neuroscience | 2011

EEG and MEG Data Analysis in SPM8

Vladimir Litvak; Jérémie Mattout; Stefan J. Kiebel; Christophe Phillips; Richard N. Henson; James M. Kilner; Gareth R. Barnes; Robert Oostenveld; Jean Daunizeau; Guillaume Flandin; William D. Penny; K. J. Friston

SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.


NeuroImage | 2003

Multivariate autoregressive modeling of fMRI time series

Lee M. Harrison; William D. Penny; K. J. Friston

We propose the use of multivariate autoregressive (MAR) models of functional magnetic resonance imaging time series to make inferences about functional integration within the human brain. The method is demonstrated with synthetic and real data showing how such models are able to characterize interregional dependence. We extend linear MAR models to accommodate nonlinear interactions to model top-down modulatory processes with bilinear terms. MAR models are time series models and thereby model temporal order within measured brain activity. A further benefit of the MAR approach is that connectivity maps may contain loops, yet exact inference can proceed within a linear framework. Model order selection and parameter estimation are implemented by using Bayesian methods.
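
A MAR model regresses each region's current value on the lagged values of all regions, so fitting reduces to a multivariate regression on a lag-embedded design matrix. The sketch below fits the lag matrices by ordinary least squares and selects the model order with BIC as a simple stand-in for the Bayesian evidence used in the paper; the function names fit_mar and select_order are hypothetical.

# Sketch of fitting a multivariate autoregressive (MAR) model to a multi-region
# time series and choosing the model order with a simple BIC criterion.
import numpy as np

def fit_mar(X, p):
    """X: (T, d) time series of d regions; returns (p, d, d) lag matrices and residuals."""
    T, d = X.shape
    Y = X[p:]                                                 # targets
    Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])  # lagged predictors (T-p, p*d)
    W, *_ = np.linalg.lstsq(Z, Y, rcond=None)                 # (p*d, d) coefficients
    resid = Y - Z @ W
    return W.reshape(p, d, d), resid

def select_order(X, max_p=10):
    """Pick the lag order minimising BIC, one simple proxy for model evidence."""
    T, d = X.shape
    scores = []
    for p in range(1, max_p + 1):
        _, resid = fit_mar(X, p)
        n = len(resid)
        sigma = (resid.T @ resid) / n
        _, logdet = np.linalg.slogdet(sigma)
        bic = n * logdet + (p * d * d) * np.log(n)
        scores.append(bic)
    return int(np.argmin(scores)) + 1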

Collaboration


Dive into William D. Penny's collaborations.

Top Co-Authors

K. J. Friston
University College London

Gareth R. Barnes
Wellcome Trust Centre for Neuroimaging

John Ashburner
Wellcome Trust Centre for Neuroimaging

Emrah Düzel
German Center for Neurodegenerative Diseases

Lee M. Harrison
Wellcome Trust Centre for Neuroimaging

Cathy J. Price
Wellcome Trust Centre for Neuroimaging

Guillaume Flandin
French Institute for Research in Computer Science and Automation