Publication


Featured research published by Merlin Keller.


NeuroImage | 2007

Mixed-effect statistics for group analysis in fMRI: A nonparametric maximum likelihood approach

Alexis Roche; Sébastien Mériaux; Merlin Keller; Bertrand Thirion

This technical note describes a collection of test statistics accounting for estimation uncertainties at the within-subject level, which can be used as alternatives to the standard t statistic in one-sample random-effect analyses, i.e. when testing the mean effect of a population. We build such test statistics by estimating the across-subject distribution of the effects using maximum likelihood under a nonparametric mixed-effect model. For inference purposes, the statistics are calibrated using permutation tests to achieve exact false positive control under a symmetry assumption regarding the across-subject distribution. The new tests are implemented in a freely available toolbox for SPM called Distance.
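
Below is a minimal, self-contained sketch of the calibration idea described in this abstract: a variance-weighted group statistic whose null distribution is obtained by sign-flipping permutations, valid under the stated symmetry assumption. It is illustrative Python, not the authors' SPM toolbox; the inputs (per-subject effect estimates and within-subject variances at a single voxel) and the particular weighted statistic are assumptions made for the example.

# Sign-flip permutation calibration of a variance-weighted group statistic.
# Illustrative only; `effects` and `variances` are hypothetical single-voxel inputs.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 12
effects = rng.normal(0.5, 1.0, n_subjects)        # per-subject effect estimates
variances = rng.uniform(0.2, 1.0, n_subjects)     # within-subject estimation variances

def weighted_stat(y, v):
    # Simple mixed-effect-style statistic: inverse-variance weighted mean,
    # standing in for the nonparametric ML statistic of the paper.
    w = 1.0 / v
    return np.sum(w * y) / np.sqrt(np.sum(w))

observed = weighted_stat(effects, variances)

# Under symmetry of the across-subject distribution, flipping the sign of each
# subject's effect leaves the null distribution invariant.
n_perm = 10000
null = np.empty(n_perm)
for b in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=n_subjects)
    null[b] = weighted_stat(signs * effects, variances)

p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)
print(f"statistic = {observed:.3f}, permutation p = {p_value:.4f}")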


Reliability Engineering & System Safety | 2012

Estimation of a quantity of interest in uncertainty analysis: Some help from Bayesian decision theory

Alberto Pasanisi; Merlin Keller; Eric Parent

In the context of risk analysis under uncertainty, we focus here on the problem of estimating a so-called quantity of interest of an uncertainty analysis problem, i.e. a given feature of the probability distribution function (pdf) of the output of a deterministic model with uncertain inputs. We stay in a fully probabilistic setting. A common problem is how to account for the epistemic uncertainty affecting the parameters of the probability distribution of the inputs. In standard practice, this uncertainty is often neglected (the plug-in approach). When a specific uncertainty assessment is made on the basis of the available information (expertise and/or data), a common solution consists in marginalizing the joint distribution of the observable inputs and the parameters of the probabilistic model (i.e. computing the predictive pdf of the inputs), then propagating it through the deterministic model. We reinterpret this approach in the light of Bayesian decision theory and show that this practice leads the analyst to implicitly adopt a specific loss function, which may be inappropriate for the problem under investigation and suboptimal from a decisional perspective. These concepts are illustrated with a simple numerical example concerning flood risk assessment.
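
The following toy Python sketch contrasts the two propagation strategies discussed in the abstract: the plug-in approach (fixing the input-distribution parameters at point estimates) versus propagating the predictive pdf of the inputs. The "flood" model, the Gamma input distribution, the threshold, and the stand-in posterior sample over the parameters are hypothetical choices for illustration, not the paper's case study.

# Plug-in vs predictive propagation of input uncertainty through a toy model.
import numpy as np

rng = np.random.default_rng(1)

def flood_model(q, a=0.3, b=0.6):
    # Deterministic code: water level as a function of discharge q (toy form).
    return a * q**b

threshold = 2.5
n_mc = 100_000

# Plug-in: fix the input-distribution parameters at point estimates.
k_hat, theta_hat = 8.0, 5.0
q_plugin = rng.gamma(k_hat, theta_hat, n_mc)
p_plugin = np.mean(flood_model(q_plugin) > threshold)

# Predictive: integrate over epistemic uncertainty on (k, theta), here represented
# by an invented sample standing in for a posterior obtained from expertise/data.
k_post = rng.normal(8.0, 1.0, n_mc).clip(min=0.5)
theta_post = rng.normal(5.0, 0.8, n_mc).clip(min=0.5)
q_pred = rng.gamma(k_post, theta_post)          # one input draw per parameter draw
p_pred = np.mean(flood_model(q_pred) > threshold)

print(f"plug-in estimate    P(h > t) ~ {p_plugin:.4f}")
print(f"predictive estimate P(h > t) ~ {p_pred:.4f}")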


Information Processing in Medical Imaging | 2007

High level group analysis of FMRI data based on Dirichlet process mixture models

Bertrand Thirion; Alan Tucholka; Merlin Keller; Philippe Pinel; Alexis Roche; Jean-François Mangin; Jean-Baptiste Poline

Inferring the position of functionally active regions from a multi-subject fMRI dataset involves the comparison of the individual data and the inference of a common activity model. While voxel-based analyses, e.g. Random Effect statistics, are widely used, they do not model each individual activation pattern. Here, we develop a new procedure that extracts structures individually and compares them at the group level. For inference about spatial locations of interest, a Dirichlet Process Mixture Model is used. Finally, inter-subject correspondences are computed with Bayesian Network models. We show the power of the technique on both simulated and real datasets and compare it with standard inference techniques.
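
As a rough illustration of the spatial inference step, the sketch below clusters activation peak coordinates pooled across subjects with a truncated variational Dirichlet process mixture from scikit-learn. This is not the authors' implementation, and the synthetic peak coordinates are invented for the example.

# Dirichlet process mixture over activation peak coordinates (illustrative).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)

# Synthetic peaks: two true regions, sampled with inter-subject spatial jitter (in mm).
true_centers = np.array([[-40.0, 20.0, 30.0], [45.0, -25.0, 10.0]])
peaks = np.vstack([c + rng.normal(0, 4.0, (15, 3)) for c in true_centers])

dpmm = BayesianGaussianMixture(
    n_components=10,                               # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(peaks)

labels = dpmm.predict(peaks)
active = np.unique(labels)
print("estimated cluster centers (mm):")
print(np.round(dpmm.means_[active], 1))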


International Symposium on Biomedical Imaging | 2010

ICA-based sparse features recovery from fMRI datasets

Gaël Varoquaux; Merlin Keller; Jean Baptiste Poline; Philippe Ciuciu; Bertrand Thirion

Spatial Independent Components Analysis (ICA) is increasingly used in the context of functional Magnetic Resonance Imaging (fMRI) to study cognition and brain pathologies. Salient features present in some of the extracted Independent Components (ICs) can be interpreted as brain networks, but the segmentation of the corresponding regions from ICs is still ill-controlled. Here we propose a new ICA-based procedure for extraction of sparse features from fMRI datasets. Specifically, we introduce a new thresholding procedure that controls the deviation from isotropy in the ICA mixing model. Unlike current heuristics, our procedure guarantees an exact, possibly conservative, level of specificity in feature detection. We evaluate the sensitivity and specificity of the method on synthetic and fMRI data and show that it outperforms state-of-the-art approaches.
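
A simplified stand-in for the thresholding step is sketched below: spatial ICA is run on synthetic data, a Gaussian null is fitted to the bulk of each component map with robust location and scale estimates, and voxels beyond a Bonferroni-corrected quantile are kept. The exact isotropy-based control proposed in the paper is not reproduced; the data, component count, and threshold level are illustrative assumptions.

# Thresholding independent component maps against a robust Gaussian null (illustrative).
import numpy as np
from scipy import stats
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 120, 5000

# Synthetic data: background noise plus one sparse spatial "network".
data = rng.normal(size=(n_timepoints, n_voxels))
network = np.zeros(n_voxels)
network[:50] = 4.0
data += rng.normal(size=(n_timepoints, 1)) @ network[None, :]

ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(data.T)   # spatial ICA: voxels as samples -> (n_voxels, k)
maps = sources.T                       # independent spatial maps, shape (k, n_voxels)

alpha = 0.05
for k, comp in enumerate(maps):
    mu = np.median(comp)
    sigma = 1.4826 * np.median(np.abs(comp - mu))   # robust null scale (MAD)
    z = (comp - mu) / sigma
    thresh = stats.norm.isf(alpha / n_voxels)        # Bonferroni-corrected quantile
    n_detected = np.sum(np.abs(z) > thresh)
    print(f"component {k}: {n_detected} voxels above |z| = {thresh:.2f}")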


arXiv: Computation | 2018

Adaptive Numerical Designs for the Calibration of Computer Codes

Guillaume Damblin; Pierre Barbillon; Merlin Keller; Alberto Pasanisi; Eric Parent

Making good predictions of a physical system using a computer code requires the inputs to be carefully specified. Some of these inputs, called control variables, reproduce physical conditions, whereas other inputs, called parameters, are specific to the computer code and most often uncertain. The goal of statistical calibration consists in reducing their uncertainty with the help of a statistical model which links the code outputs with the field measurements. In a Bayesian setting, the posterior distribution of these parameters is typically sampled using Markov Chain Monte Carlo methods. However, they are impractical when the code runs are highly time-consuming. A way to circumvent this issue consists of replacing the computer code with a Gaussian process emulator, then sampling a surrogate posterior distribution based on it. Doing so, calibration is subject to an error which strongly depends on the numerical design of experiments used to fit the emulator. Under the assumption that there is no code discre...
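
The sketch below illustrates the emulator-based surrogate posterior, the object the paper's adaptive designs aim to make accurate: a Gaussian process is fitted to a small design of code runs, and a random-walk Metropolis sampler targets the resulting surrogate posterior of the parameter. The one-dimensional toy code, design size, priors, and noise level are assumptions; the adaptive design strategy itself is not reproduced.

# Emulator-based Bayesian calibration with a surrogate posterior (illustrative).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def code(x, theta):
    # Stand-in for the expensive computer code (one control variable x, one parameter theta).
    return np.sin(3 * x) + theta * x

# Field measurements generated with "true" theta = 0.7 and observation noise.
x_field = np.linspace(0, 1, 8)
y_field = code(x_field, 0.7) + rng.normal(0, 0.05, x_field.size)

# Small numerical design in the joint (x, theta) space, then GP emulator fit.
design = rng.uniform([0, 0], [1, 2], size=(30, 2))
runs = code(design[:, 0], design[:, 1])
gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.3, 0.5]), alpha=1e-6).fit(design, runs)

def log_post(theta, sigma=0.05):
    # Surrogate log-posterior: uniform prior on [0, 2], Gaussian likelihood via the emulator.
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    pred = gp.predict(np.column_stack([x_field, np.full_like(x_field, theta)]))
    return -0.5 * np.sum((y_field - pred) ** 2) / sigma**2

# Random-walk Metropolis on the surrogate posterior.
theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print(f"posterior mean of theta ~ {np.mean(samples[1000:]):.3f} (true value 0.7)")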


Quality and Reliability Engineering International | 2016

Bayesian Model Selection for the Validation of Computer Codes

Guillaume Damblin; Merlin Keller; Pierre Barbillon; Alberto Pasanisi; Eric Parent

Complex physical systems are increasingly modeled by computer codes which aim at predicting reality as accurately as possible. During the last decade, code validation has attracted considerable interest within the scientific community because of the requirement to assess the uncertainty affecting the code outputs. Drawing on past contributions to this task, a testing procedure is proposed in this paper to decide whether a pure code prediction or a discrepancy-corrected one should be used to provide the best approximation of the physical system. In the particular case where the computer code depends on uncertain parameters, this problem of model selection can be carried out in a Bayesian setting. It requires the specification of proper prior distributions, which are well known to have a strong impact on the results. Another way consists in specifying non-informative priors. However, these are sometimes improper, which is a major barrier to computing the Bayes factor. A way to overcome this issue is to use the so-called intrinsic Bayes factor (IBF) in place of the ill-defined Bayes factor when improper priors are used. For computer codes which depend linearly on their parameters, the computation of the IBF is made easier thanks to an explicit marginalization. In the paper, we present a special case where the IBF is equal to the standard Bayes factor when the right-Haar prior is specified on the code parameters and the scale of the code discrepancy. On simulated data, the IBF has been computed for several prior distributions. A confounding effect between the code discrepancy and the linear code is pointed out. Finally, the IBF is computed for an industrial computer code used for monitoring power plant production.
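
To make the model-selection question concrete, the toy sketch below estimates marginal likelihoods by Monte Carlo under proper priors for a pure code prediction and for a discrepancy-corrected one, then reports the resulting log Bayes factor. It does not implement the intrinsic Bayes factor or the right-Haar prior analysis of the paper; the linear code, constant discrepancy term, priors, and data are illustrative assumptions.

# Bayes factor between a pure code prediction and a discrepancy-corrected one (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def code(x, theta):
    return theta * x                       # code linear in its parameter

x = np.linspace(0, 1, 20)
y = 1.2 * x + 0.3 + rng.normal(0, 0.05, x.size)   # "reality": code plus a constant bias

sigma = 0.05
n_prior = 50_000
theta_prior = rng.normal(1.0, 0.5, n_prior)        # proper prior on the code parameter
bias_prior = rng.normal(0.0, 0.5, n_prior)         # proper prior on a constant discrepancy

def log_lik(mean):
    return stats.norm.logpdf(y, loc=mean, scale=sigma).sum(axis=-1)

# M0: pure code prediction. M1: code plus constant discrepancy term.
ll0 = log_lik(theta_prior[:, None] * x[None, :])
ll1 = log_lik(theta_prior[:, None] * x[None, :] + bias_prior[:, None])

# Monte Carlo estimates of the log evidences (log-sum-exp for numerical stability).
log_ev0 = np.log(np.mean(np.exp(ll0 - ll0.max()))) + ll0.max()
log_ev1 = np.log(np.mean(np.exp(ll1 - ll1.max()))) + ll1.max()
print(f"log Bayes factor (discrepancy-corrected vs pure code): {log_ev1 - log_ev0:.1f}")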


15th Annual Conference of the European Network for Business and Industrial Statistics (ENBIS-15) | 2015

Bayesian model selection for the validation of computer codes

Guillaume Damblin; Merlin Keller; Pierre Barbillon; Alberto Pasanisi; Éric Parent



International Symposium on Biomedical Imaging | 2008

Increased sensitivity in FMRI group analysis using mixed-effect modeling

Merlin Keller; Alexis Roche

In functional Magnetic Resonance Imaging group studies, uncertainties on the individual BOLD responses are not taken into account by standard detection procedures, which may limit their sensitivity. Mixed-effect models have been introduced to derive decision statistics that weight the subjects according to their reliability. To date, however, the associated statistical tests are seldom used by investigators, partly because they are inexact, controlling the false positive risk only approximately. We tackle this problem using a permutation testing framework that yields exact tests under mild nonparametric assumptions. This approach enables us to evaluate the sensitivity of mixed-effect statistics on a mental calculation experiment involving men and women.
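
A small illustration of the reliability-weighting argument is given below: on synthetic single-voxel data with heterogeneous within-subject variances, an inverse-variance weighted statistic is compared with the ordinary one-sample t statistic. The permutation calibration is the same as in the sketch after the NeuroImage 2007 abstract and is omitted here; all numbers are invented for the example.

# Reliability weighting vs ordinary t statistic on synthetic single-voxel data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_subjects = 14
true_effect = 0.4
variances = rng.uniform(0.05, 2.0, n_subjects)       # heterogeneous within-subject noise
effects = true_effect + rng.normal(0, np.sqrt(variances))

t_ordinary, p_ordinary = stats.ttest_1samp(effects, 0.0)

w = 1.0 / variances                                   # inverse-variance weights
t_weighted = np.sum(w * effects) / np.sqrt(np.sum(w))

print(f"ordinary t = {t_ordinary:.2f} (p = {p_ordinary:.3f}), weighted statistic = {t_weighted:.2f}")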


Archive | 2008

Dealing with spatial normalization errors in fMRI group inference using hierarchical modeling

Merlin Keller; Alexis Roche; Alan Tucholka; Bertrand Thirion


Quality and Reliability Engineering International | 2014

A Bayesian Methodology Applied to the Estimation of Earthquake Recurrence Parameters for Seismic Hazard Assessment

Merlin Keller; Alberto Pasanisi; Marine Marcilhac; Thierry Yalamas; Ramon Secanell; Gloria Senfaute

Collaboration


Dive into Merlin Keller's collaborations.

Top Co-Authors

Éric Parent

Université Paris-Saclay

Alan Tucholka

French Institute for Research in Computer Science and Automation

Alexis Roche

French Institute for Research in Computer Science and Automation

Jean-Baptiste Poline

French Alternative Energies and Atomic Energy Commission
