Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexander Ly is active.

Publication


Featured research published by Alexander Ly.


Psychonomic Bulletin & Review | 2018

Bayesian inference for psychology. Part II: Example applications with JASP

Eric-Jan Wagenmakers; Jonathon Love; Maarten Marsman; Tahira Jamil; Alexander Ly; Josine Verhagen; Ravi Selker; Quentin Frederik Gronau; Damian Dropmann; Bruno Boutin; Frans Meerhoff; Patrick Knight; Akash Raj; Erik-Jan van Kesteren; Johnny van Doorn; Martin Šmíra; Sacha Epskamp; Alexander Etz; Dora Matzke; Tim de Jong; Don van den Bergh; Alexandra Sarafoglou; Helen Steingroever; Koen Derks; Jeffrey N. Rouder; Richard D. Morey

Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.


Psychonomic Bulletin & Review | 2018

Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications

Eric-Jan Wagenmakers; Maarten Marsman; Tahira Jamil; Alexander Ly; Josine Verhagen; Jonathon Love; Ravi Selker; Quentin Frederik Gronau; Martin Šmíra; Sacha Epskamp; Dora Matzke; Jeffrey N. Rouder; Richard D. Morey

Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).


Behavior Research Methods | 2015

A power fallacy

Eric-Jan Wagenmakers; Josine Verhagen; Alexander Ly; Marjan Bakker; Michael D. Lee; Dora Matzke; Jeffrey N. Rouder; Richard D. Morey

The power fallacy refers to the misconception that what holds on average –across an ensemble of hypothetical experiments– also holds for each case individually. According to the fallacy, high-power experiments always yield more informative data than do low-power experiments. Here we expose the fallacy with concrete examples, demonstrating that a particular outcome from a high-power experiment can be completely uninformative, whereas a particular outcome from a low-power experiment can be highly informative. Although power is useful in planning an experiment, it is less useful—and sometimes even misleading—for making inferences from observed data. To make inferences from data, we recommend the use of likelihood ratios or Bayes factors, which are the extension of likelihood ratios beyond point hypotheses. These methods of inference do not average over hypothetical replications of an experiment, but instead condition on the data that have actually been observed. In this way, likelihood ratios and Bayes factors rationally quantify the evidence that a particular data set provides for or against the null or any other hypothesis.
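The contrast the abstract draws between average-case power and case-by-case evidence can be made concrete with a toy binomial example. The hypotheses and numbers below are illustrative choices, not values from the paper:

```python
import math

def binomial_log_likelihood(k, n, theta):
    """Log-likelihood of k successes in n trials under success probability theta."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(theta) + (n - k) * math.log(1 - theta))

def likelihood_ratio(k, n, theta0, theta1):
    """Likelihood ratio of H1 (theta = theta1) over H0 (theta = theta0),
    conditioning only on the data actually observed."""
    return math.exp(binomial_log_likelihood(k, n, theta1)
                    - binomial_log_likelihood(k, n, theta0))

# 60 successes in 100 trials: compare H0: theta = 0.5 against H1: theta = 0.75.
lr = likelihood_ratio(60, 100, 0.5, 0.75)
print(f"LR(H1/H0) = {lr:.4f}")  # < 1: this particular data set favors H0
```

Note that no hypothetical replications enter the computation: the ratio depends only on the observed count, which is exactly the conditioning the abstract recommends.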


Behavior Research Methods | 2016

How to quantify the evidence for the absence of a correlation

Eric-Jan Wagenmakers; Josine Verhagen; Alexander Ly

We present a suite of Bayes factor hypothesis tests that allow researchers to grade the decisiveness of the evidence that the data provide for the presence versus the absence of a correlation between two variables. For concreteness, we apply our methods to the recent work of Donnellan et al. (in press) who conducted nine replication studies with over 3,000 participants and failed to replicate the phenomenon that lonely people compensate for a lack of social warmth by taking warmer baths or showers. We show how the Bayes factor hypothesis test can quantify evidence in favor of the null hypothesis, and how the prior specification for the correlation coefficient can be used to define a broad range of tests that address complementary questions. Specifically, we show how the prior specification can be adjusted to create a two-sided test, a one-sided test, a sensitivity analysis, and a replication test.
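The paper's default Bayes factors rest on the exact sampling distribution of the correlation and a stretched-beta prior; purely to illustrate the idea of quantifying evidence *for* the absence of a correlation, here is a cruder sketch using the Fisher-z approximation atanh(r) ~ Normal(atanh(rho), 1/(n-3)) and a uniform prior on rho (both simplifying assumptions of this sketch, not the paper's method):

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf01_correlation(r, n, grid_size=4001):
    """Approximate Bayes factor BF01 for H0: rho = 0 vs H1: rho ~ Uniform(-1, 1),
    via the Fisher-z approximation to the sampling distribution of r."""
    z, se = math.atanh(r), 1.0 / math.sqrt(n - 3)
    # Marginal likelihood under H1: integrate the likelihood over the prior
    # (trapezoid rule on a grid over rho).
    lo, hi = -0.999, 0.999
    step = (hi - lo) / (grid_size - 1)
    m1 = 0.0
    for i in range(grid_size):
        rho = lo + i * step
        weight = 0.5 if i in (0, grid_size - 1) else 1.0
        m1 += weight * normal_pdf(z, math.atanh(rho), se) * 0.5  # prior density 1/2
    m1 *= step
    m0 = normal_pdf(z, 0.0, se)  # likelihood under the point-null H0
    return m0 / m1

# A near-zero sample correlation yields BF01 > 1: evidence FOR the null.
print(bf01_correlation(r=0.02, n=100))
```

Widening or narrowing the prior on rho is exactly the kind of adjustment the abstract describes for building one-sided, sensitivity, and replication variants of the test.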


Behavior Research Methods | 2017

Default “Gunel and Dickey” Bayes factors for contingency tables

Tahira Jamil; Alexander Ly; Richard D. Morey; Jonathon Love; Maarten Marsman; Eric-Jan Wagenmakers

The analysis of R×C contingency tables usually features a test for independence between row and column counts. Throughout the social sciences, the adequacy of the independence hypothesis is generally evaluated by the outcome of a classical p-value null-hypothesis significance test. Unfortunately, however, the classical p-value comes with a number of well-documented drawbacks. Here we outline an alternative, Bayes factor method to quantify the evidence for and against the hypothesis of independence in R×C contingency tables. First we describe different sampling models for contingency tables and provide the corresponding default Bayes factors as originally developed by Gunel and Dickey (Biometrika, 61(3):545–557 (1974)). We then illustrate the properties and advantages of a Bayes factor analysis of contingency tables through simulations and practical examples. Computer code is available online and has been incorporated in the “BayesFactor” R package and the JASP program (jasp-stats.org).
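The Gunel and Dickey defaults require the prior setup developed in the paper (and are available in the "BayesFactor" package and JASP); purely to illustrate how a Bayes factor can weigh independence against association in an R×C table, here is a rough BIC-based approximation — an assumption of this sketch, not the paper's method:

```python
import math

def bic_bf01_independence(table):
    """Crude BIC-based approximation to BF01 for row/column independence in an
    R x C contingency table under multinomial sampling. This is NOT the
    Gunel-Dickey default Bayes factor; it only illustrates comparing the
    independence model against the saturated model."""
    rows, cols = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    ll_indep = ll_sat = 0.0
    for i in range(rows):
        for j in range(cols):
            count = table[i][j]
            if count > 0:
                ll_indep += count * math.log(row_tot[i] / n * col_tot[j] / n)
                ll_sat += count * math.log(count / n)
    k_indep = (rows - 1) + (cols - 1)   # free parameters under independence
    k_sat = rows * cols - 1             # free parameters, saturated model
    bic_indep = -2 * ll_indep + k_indep * math.log(n)
    bic_sat = -2 * ll_sat + k_sat * math.log(n)
    return math.exp((bic_sat - bic_indep) / 2)  # BF01: > 1 favors independence

print(bic_bf01_independence([[30, 10], [10, 30]]))  # strong association: BF01 << 1
print(bic_bf01_independence([[20, 20], [20, 20]]))  # exact independence: BF01 > 1
```

Unlike a p-value, the quantity above can express graded support for the independence hypothesis itself, which is the advantage the abstract emphasizes.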


Frontiers in Psychology | 2015

Turning the hands of time again: a purely confirmatory replication study and a Bayesian analysis

Eric-Jan Wagenmakers; Titia Beek; Mark Rotteveel; Alex Gierholz; Dora Matzke; Helen Steingroever; Alexander Ly; Josine Verhagen; Ravi Selker; Adam Sasiadek; Quentin Frederik Gronau; Jonathon Love; Yair Pinto

In a series of four experiments, Topolinski and Sparenberg (2012) found support for the conjecture that clockwise movements induce psychological states of temporal progression and an orientation toward the future and novelty. Here we report the results of a preregistered replication attempt of Experiment 2 from Topolinski and Sparenberg (2012). Participants turned kitchen rolls either clockwise or counterclockwise while answering items from a questionnaire assessing openness to experience. Data from 102 participants showed that the effect went slightly in the direction opposite to that predicted by Topolinski and Sparenberg (2012), and a preregistered Bayes factor hypothesis test revealed that the data were 10.76 times more likely under the null hypothesis than under the alternative hypothesis. Our findings illustrate the theoretical importance and practical advantages of preregistered Bayes factor replication studies, both for psychological science and for empirical work in general.


The American Statistician | 2016

Bayesian Inference for Kendall’s Rank Correlation Coefficient

Johnny van Doorn; Alexander Ly; Maarten Marsman; Eric-Jan Wagenmakers

This article outlines a Bayesian methodology to estimate and test the Kendall rank correlation coefficient τ. The nonparametric nature of rank data implies the absence of a generative model and the lack of an explicit likelihood function. These challenges can be overcome by modeling test statistics rather than data. We also introduce a method for obtaining a default prior distribution. The combined result is an inferential methodology that yields a posterior distribution for Kendall’s τ.
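The test statistic being modeled here, Kendall's τ itself, is simple to compute from concordant and discordant pairs; a minimal O(n²) implementation for tie-free data:

```python
def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Assumes no ties; quadratic in n, which is fine for illustration."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau_a([1, 2, 3, 4, 5], [1, 3, 2, 5, 4]))  # 0.6
```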


Statistica Neerlandica | 2018

Analytic posteriors for Pearson's correlation coefficient

Alexander Ly; Maarten Marsman; Eric-Jan Wagenmakers

Pearson's correlation is one of the most common measures of linear dependence. Recently, Bernardo (11th International Workshop on Objective Bayes Methodology, 2015) introduced a flexible class of priors to study this measure in a Bayesian setting. For this large class of priors, we show that the (marginal) posterior for Pearson's correlation coefficient and all of the posterior moments are analytic. Our results are available in the open-source software package JASP.


Attention Perception & Psychophysics | 2017

A test of the diffusion model explanation for the worst performance rule using preregistration and blinding

Gilles Dutilh; Joachim Vandekerckhove; Alexander Ly; Dora Matzke; Andreas Pedroni; Renato Frey; Jörg Rieskamp; Eric-Jan Wagenmakers

People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people’s slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter—drift rate—mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant’s score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.
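The two-stage blinding procedure described above (shuffle the key predictor, freeze the analysis, then unshuffle) can be sketched as follows; the participant IDs and scores are made up for illustration:

```python
import random

# Stage 1: blind the key predictor by shuffling it across participants, so that
# modeling decisions (outlier exclusion, parameterization) cannot be tuned to
# the hypothesis of interest. The true mapping is kept aside, untouched.
wmc_scores = {"p1": 95, "p2": 110, "p3": 88, "p4": 120, "p5": 102}

rng = random.Random(42)
shuffled_values = list(wmc_scores.values())
rng.shuffle(shuffled_values)
blinded_scores = dict(zip(wmc_scores.keys(), shuffled_values))
# ... all modeling decisions are frozen while working with `blinded_scores` ...

# Stage 2: unblind by restoring the true mapping, then run only the
# preregistered confirmatory tests on the real predictor.
unblinded_scores = dict(wmc_scores)
```

The shuffled copy preserves the marginal distribution of the predictor (so outlier rules remain sensible) while destroying its association with task performance, which is exactly what the blinding is meant to achieve.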


Journal of Mathematical Psychology | 2017

A Tutorial on Bridge Sampling

Quentin Frederik Gronau; Alexandra Sarafoglou; Dora Matzke; Alexander Ly; Udo Boehm; Maarten Marsman; David S. Leslie; Jonathon J. Forster; Eric-Jan Wagenmakers; Helen Steingroever

The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling (Bennett, 1976; Meng & Wong, 1996), a reliable and relatively straightforward sampling method that allows researchers to obtain the marginal likelihood for models of varying complexity. First, we introduce bridge sampling and three related sampling methods using the beta-binomial model as a running example. We then apply bridge sampling to estimate the marginal likelihood for the Expectancy Valence (EV) model—a popular model for reinforcement learning. Our results indicate that bridge sampling provides accurate estimates for both a single participant and a hierarchical version of the EV model. We conclude that bridge sampling is an attractive method for mathematical psychologists who typically aim to approximate the marginal likelihood for a limited set of possibly high-dimensional models.
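The iterative scheme of Meng and Wong (1996) can be sketched for the tutorial's beta-binomial running example, where the marginal likelihood under a uniform prior is exactly 1/(n+1) and so provides a check on the estimate. This is a minimal stdlib-only illustration, not the tutorial's own code:

```python
import math
import random

random.seed(1)

# Beta-binomial example: k successes in n trials, theta ~ Uniform(0, 1).
# The true marginal likelihood is 1 / (n + 1).
n_trials, k = 20, 14

def log_unnorm_post(xi):
    """Unnormalized posterior on the logit scale: likelihood * prior * Jacobian."""
    theta = 1.0 / (1.0 + math.exp(-xi))
    log_binom = (math.lgamma(n_trials + 1) - math.lgamma(k + 1)
                 - math.lgamma(n_trials - k + 1))
    log_lik = log_binom + k * math.log(theta) + (n_trials - k) * math.log(1 - theta)
    return log_lik + math.log(theta * (1 - theta))  # prior = 1, Jacobian theta(1-theta)

# The posterior is Beta(k+1, n-k+1); sample it directly, map to the logit scale.
N = 20000
post = [random.betavariate(k + 1, n_trials - k + 1) for _ in range(N)]
post_xi = [math.log(t / (1 - t)) for t in post]

# Proposal g: a normal distribution fitted to the posterior samples.
mu = sum(post_xi) / N
sd = math.sqrt(sum((x - mu) ** 2 for x in post_xi) / (N - 1))
prop_xi = [random.gauss(mu, sd) for _ in range(N)]

def log_norm_pdf(x):
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

# l = q(xi) / g(xi) for both sample sets.
l_post = [math.exp(log_unnorm_post(x) - log_norm_pdf(x)) for x in post_xi]
l_prop = [math.exp(log_unnorm_post(x) - log_norm_pdf(x)) for x in prop_xi]

# Meng & Wong's iterative bridge estimator of the marginal likelihood.
s1 = s2 = 0.5  # equal sample sizes from posterior and proposal
m = 1.0 / (n_trials + 2)  # any positive starting value works
for _ in range(100):
    num = sum(l / (s1 * l + s2 * m) for l in l_prop) / N
    den = sum(1.0 / (s1 * l + s2 * m) for l in l_post) / N
    m = num / den

print(f"bridge estimate: {m:.5f}, true value: {1 / (n_trials + 1):.5f}")
```

Sampling on the logit scale is one of the design choices the tutorial discusses: it makes the posterior closer to the normal proposal, which keeps the importance ratios well behaved.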

Collaboration


Dive into Alexander Ly's collaborations.

Top Co-Authors

Dora Matzke

University of Amsterdam


Ravi Selker

University of Amsterdam


Tahira Jamil

University of Amsterdam
