Publication


Featured research published by Udo Boehm.


Psychonomic Bulletin & Review | 2016

Of monkeys and men: Impatience in perceptual decision-making

Udo Boehm; Guy E. Hawkins; Scott D. Brown; Hedderik van Rijn; Eric-Jan Wagenmakers

For decades, sequential sampling models have successfully accounted for human and monkey decision-making, relying on the standard assumption that decision makers maintain a pre-set decision standard throughout the decision process. Based on the theoretical argument of reward rate maximization, some authors have recently suggested that decision makers become increasingly impatient as time passes and therefore lower their decision standard. Indeed, a number of studies show that computational models with an impatience component provide a good fit to human and monkey decision behavior. However, many of these studies lack quantitative model comparisons and systematic manipulations of rewards. Moreover, the often-cited evidence from single-cell recordings is not unequivocal, and complementary data from human subjects are largely missing. We conclude that, despite some enthusiastic calls for the abandonment of the standard model, the idea of an impatience component has yet to be fully established; we suggest a number of recently developed tools that will help bring the debate to a conclusive settlement.


Psychonomic Bulletin & Review | 2018

Bayesian Inference for Psychology, Part III: Parameter Estimation in Nonstandard Models

Dora Matzke; Udo Boehm; Joachim Vandekerckhove

We demonstrate the use of three popular Bayesian software packages that enable researchers to estimate parameters in a broad class of models that are commonly used in psychological research. We focus on WinBUGS, JAGS, and Stan, and show how they can be interfaced from R and MATLAB. We illustrate the use of the packages through two fully worked examples; the examples involve a simple univariate linear regression and fitting a multinomial processing tree model to data from a classic false-memory experiment. We conclude with a comparison of the strengths and weaknesses of the packages. Our example code, data, and this text are available via https://osf.io/ucmaz/.
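The paper's first worked example, a simple univariate linear regression, does not strictly require any of the three packages. As a rough stand-in sketch (plain random-walk Metropolis in Python instead of WinBUGS/JAGS/Stan, with made-up data and flat priors, all hypothetical), the same posterior can be sampled directly:

```python
import numpy as np

# Hypothetical sketch: a univariate linear regression y = b0 + b1*x + noise,
# sampled with a plain random-walk Metropolis algorithm rather than the
# WinBUGS/JAGS/Stan workflow the paper actually demonstrates.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # true b0=1, b1=2, sigma=0.5

def log_posterior(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    resid = y - (b0 + b1 * x)
    # Gaussian log-likelihood; flat (improper) priors on b0, b1, log_s
    return -n * np.log(s) - 0.5 * np.sum(resid ** 2) / s ** 2

def metropolis(n_iter=20000, step=0.05):
    theta = np.zeros(3)
    lp = log_posterior(theta)
    chain = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + rng.normal(scale=step, size=3)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_iter // 2:]                      # discard burn-in

chain = metropolis()
b0_hat, b1_hat, log_s_hat = chain.mean(axis=0)
```

The posterior means should land near the generating values; dedicated samplers such as Stan's NUTS mix far more efficiently than this random walk, which is part of the packages' appeal.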


Journal of Mathematical Psychology | 2017

A Tutorial on Bridge Sampling

Quentin Frederik Gronau; Alexandra Sarafoglou; Dora Matzke; Alexander Ly; Udo Boehm; Maarten Marsman; David S. Leslie; Jonathon J. Forster; Eric-Jan Wagenmakers; Helen Steingroever

The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling (Bennett, 1976; Meng & Wong, 1996), a reliable and relatively straightforward sampling method that allows researchers to obtain the marginal likelihood for models of varying complexity. First, we introduce bridge sampling and three related sampling methods using the beta-binomial model as a running example. We then apply bridge sampling to estimate the marginal likelihood for the Expectancy Valence (EV) model—a popular model for reinforcement learning. Our results indicate that bridge sampling provides accurate estimates for both a single participant and a hierarchical version of the EV model. We conclude that bridge sampling is an attractive method for mathematical psychologists who typically aim to approximate the marginal likelihood for a limited set of possibly high-dimensional models.
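The beta-binomial running example admits a closed-form marginal likelihood, which makes it a convenient benchmark. The sketch below (hypothetical values; it implements only the simplest of the related estimators, naive Monte Carlo over the prior, not the bridge sampler itself) checks the estimator against the analytic answer:

```python
import numpy as np
from math import comb, lgamma, exp

# Beta-binomial running example: y successes in n trials, theta ~ Beta(a, b).
def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def analytic_marginal(y, n, a=1.0, b=1.0):
    # p(y) = C(n, y) * B(y + a, n - y + b) / B(a, b)
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

def mc_marginal(y, n, a=1.0, b=1.0, draws=100_000, seed=0):
    # Naive Monte Carlo: average the binomial likelihood over prior draws
    rng = np.random.default_rng(seed)
    theta = rng.beta(a, b, size=draws)
    lik = comb(n, y) * theta ** y * (1 - theta) ** (n - y)
    return lik.mean()
```

With a uniform Beta(1, 1) prior the marginal likelihood is 1/(n+1) for any y, so the Monte Carlo estimate is easy to verify; bridge sampling improves on this baseline when the prior and posterior overlap poorly.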


Behavior Research Methods | 2018

Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models

Udo Boehm; Helen Steingroever; Eric-Jan Wagenmakers

Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework over the conventional classification-based approach.
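The contrast between the two approaches can be sketched in a few lines. This is a hypothetical illustration only (simulated data, ordinary least squares rather than the authors' Bayes-factor framework): the median-split shortcut discards within-group variation in the covariate, while the regression uses it directly.

```python
import numpy as np

# Hypothetical simulation: a cognitive-model parameter that depends linearly
# on a covariate, analyzed two ways.
rng = np.random.default_rng(42)
n = 100
covariate = rng.normal(size=n)                            # e.g., a physiological measure
param = 0.3 * covariate + rng.normal(scale=1.0, size=n)   # noisy parameter estimates

# Classification-based shortcut: median-split the covariate, compare group means
high = param[covariate > np.median(covariate)]
low = param[covariate <= np.median(covariate)]
group_diff = high.mean() - low.mean()

# Regression approach: estimate the slope relating parameter to covariate
slope = np.polyfit(covariate, param, 1)[0]
```

Across repeated simulations the slope estimate recovers the generating relationship with less loss of information than the dichotomized group comparison, which is the pattern the paper's simulation study quantifies within a Bayesian framework.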


Scientific Reports | 2017

The computations that support simple decision-making: A comparison between the diffusion and urgency-gating models

Nathan J. Evans; Guy E. Hawkins; Udo Boehm; Eric-Jan Wagenmakers; Scott D. Brown

We investigate a question relevant to the psychology and neuroscience of perceptual decision-making: whether decisions are based on steadily accumulating evidence, or only on the most recent evidence. We report an empirical comparison between two of the most prominent examples of these theoretical positions, the diffusion model and the urgency-gating model, via model-based qualitative and quantitative comparisons. Our findings support the predictions of the diffusion model over the urgency-gating model, and therefore, the notion that evidence accumulates without much decay. Gross qualitative patterns and fine structural details of the data are inconsistent with the notion that decisions are based only on the most recent evidence. More generally, we discuss some strengths and weaknesses of scientific methods that investigate quantitative models by distilling the formal models to qualitative predictions.
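The theoretical contrast can be made concrete with a schematic sketch of the two update rules (illustrative parameter values only, not the paper's fitted models): the diffusion model integrates evidence without decay, while the urgency-gating model multiplies a fast low-pass filter of the evidence, so that only recent samples matter, by a growing urgency signal.

```python
import numpy as np

# Schematic sketch of one noisy-evidence trial under the two rules.
rng = np.random.default_rng(0)
dt, T, threshold = 0.001, 2.0, 1.0
steps = int(T / dt)
evidence = 1.0 + rng.normal(scale=1.0, size=steps)   # constant signal + noise

# Diffusion model: evidence is accumulated without decay
dv_diffusion = np.cumsum(evidence) * dt

# Urgency-gating model: fast low-pass filter (only recent evidence survives)
# multiplied by a linearly growing urgency signal
tau = 0.1                                            # filter time constant (s)
filtered = np.empty(steps)
acc = 0.0
for i, e in enumerate(evidence):
    acc += (e - acc) * dt / tau                      # leaky integration
    filtered[i] = acc
urgency = np.linspace(0.0, T, steps)
dv_urgency = urgency * filtered

def first_crossing(dv):
    # Return the time of the first threshold crossing, or None
    hits = np.flatnonzero(np.abs(dv) >= threshold)
    return hits[0] * dt if hits.size else None
```

Both decision variables eventually cross the threshold here, but they respond very differently to a brief pulse of evidence early in the trial, which is the kind of qualitative signature the paper exploits.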


Behavior Research Methods | 2018

On the importance of avoiding shortcuts in applying cognitive models to hierarchical data

Udo Boehm; Maarten Marsman; Dora Matzke; Eric-Jan Wagenmakers

Psychological experiments often yield data that are hierarchically structured. A number of popular shortcut strategies in cognitive modeling do not properly accommodate this structure and can result in biased conclusions. To gauge the severity of these biases, we conducted a simulation study for a two-group experiment. We first considered a modeling strategy that ignores the hierarchical data structure. In line with theoretical results, our simulations showed that Bayesian and frequentist methods that rely on this strategy are biased towards the null hypothesis. Secondly, we considered a modeling strategy that takes a two-step approach by first obtaining participant-level estimates from a hierarchical cognitive model and subsequently using these estimates in a follow-up statistical test. Methods that rely on this strategy are biased towards the alternative hypothesis. Only hierarchical models of the multilevel data lead to correct conclusions. Our results are particularly relevant for the use of hierarchical Bayesian parameter estimates in cognitive modeling.


NeuroImage | 2014

Trial-by-trial fluctuations in CNV amplitude reflect anticipatory adjustment of response caution

Udo Boehm; Leendert van Maanen; Birte U. Forstmann; Hedderik van Rijn


Archive | 2018

Estimating Between-Trial Variability Parameters of the Diffusion Decision Model

Udo Boehm; Jeff Annis; Michael J. Frank; Guy E. Hawkins; Andrew Heathcote; David Kellen; Angelos-Miltiadis Krypotos; Veronika Lerche; Gordon D. Logan; Thomas J. Palmeri


Journal of Mathematical Psychology | 2018

Estimating Across-Trial Variability Parameters of the Diffusion Decision Model: Expert Advice and Recommendations

Udo Boehm; Jeffrey Annis; Michael J. Frank; Guy E. Hawkins; Andrew Heathcote; David Kellen; Angelos-Miltiadis Krypotos; Veronika Lerche; Gordon D. Logan; Thomas J. Palmeri; Don van Ravenzwaaij; Mathieu Servant; Henrik Singmann; Jeffrey J. Starns; Andreas Voss; Thomas V. Wiecki; Dora Matzke; Eric-Jan Wagenmakers


Archive | 2016

Estimating Between-Trial DDM Parameters

Udo Boehm; Eric-Jan Wagenmakers; Dora Matzke; Mathieu Servant; Jeff Annis; Guy E. Hawkins; Andrew Heathcote; Angelos-Miltiadis Krypotos; Don van Ravenzwaaij; Andreas Voss

Collaboration


Dive into Udo Boehm's collaborations.

Top Co-Authors
Dora Matzke

University of Amsterdam
