Publications


Featured research published by Julia M. Haaf.


Journal of Experimental Psychology: General | 2016

Subliminal evaluative conditioning? Above-chance CS identification may be necessary and insufficient for attitude learning.

Christoph Stahl; Julia M. Haaf; Olivier Corneille

Previous research has claimed that evaluative conditioning (EC) effects may obtain in the absence of perceptual identification of conditioned stimuli (CSs). A recent meta-analysis suggested similar effect sizes for supra- and subliminal CSs, but this was based on a small body of evidence (k = 8 studies; Hofmann, De Houwer, Perugini, Baeyens, & Crombez, 2010). We critically discuss this prior evidence, and then report and discuss 6 experimental studies that investigate EC effects for briefly presented CSs using more stringent methods. Across these studies, we varied CS duration, the presence or absence of masking, the presence or absence of a CS identification check, CS material, and the instructions communicated to participants. EC effects for longer-duration CSs were modulated by attention to the CS-US pairing. Across studies, we were consistently unable to obtain EC for briefly presented CSs. In most studies, this pattern was observed despite above-chance perceptual identification of the CSs. A meta-analysis conducted across the 27 experimental conditions supported the null hypothesis of no EC for perceptually unidentified CSs. We conclude that EC effects for briefly presented and masked CSs are either not robust, are very small, or are limited to specific conditions that remain to be identified (or any combination of these).


Psychonomic Bulletin & Review | 2018

Bayesian inference for psychology, part IV: parameter estimation and Bayes factors

Jeffrey N. Rouder; Julia M. Haaf; Joachim Vandekerckhove

In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
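As a rough numerical sketch of the spike-and-slab idea described above (a toy normal-mean problem with invented numbers, not the authors' specification), the marginal likelihood of the data under the spike is compared with that under the slab, and a model-averaged effect estimate falls out as a function of the Bayes factor:

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Toy data: sample mean of n observations with known sampling variance.
# All numbers here are hypothetical, chosen only for illustration.
n, xbar, sigma2 = 50, 0.3, 1.0
tau2 = 0.5                 # slab (effect-model) prior variance, an assumption

se2 = sigma2 / n           # variance of the sample mean

# Marginal likelihood under the spike (point null at 0)
m_spike = normal_pdf(xbar, 0.0, se2)
# Marginal likelihood under the slab: prior and sampling variance convolve
m_slab = normal_pdf(xbar, 0.0, se2 + tau2)

bf_10 = m_slab / m_spike   # Bayes factor for slab over spike

# With equal prior odds, the posterior probability of the slab:
p_slab = bf_10 / (1.0 + bf_10)

# Conditional posterior mean under the slab (normal-normal shrinkage)
shrink = tau2 / (tau2 + se2)

# Model-averaged estimate: the spike contributes zero, so the estimate is
# the slab's shrunken mean weighted by p_slab -- i.e., a function of the
# Bayes factor, illustrating how estimation and model comparison unify.
estimate = p_slab * shrink * xbar
```

Because `p_slab` is a monotone function of `bf_10`, strong evidence for the null pulls the model-averaged estimate toward exactly zero, which is the unification the abstract describes.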


Psychological Methods | 2017

Developing constraint in Bayesian mixed models.

Julia M. Haaf; Jeffrey N. Rouder

Model comparison in Bayesian mixed models is becoming popular in psychological science. Here we develop a set of nested models that account for order restrictions across individuals in psychological tasks. An order-restricted model addresses the question “Does everybody,” as in “Does everybody show the usual Stroop effect,” or “Does everybody respond more quickly to intense noises than subtle ones?” The crux of the modeling is the instantiation of 10s or 100s of order restrictions simultaneously, one for each participant. To our knowledge, the problem is intractable in frequentist contexts but relatively straightforward in Bayesian ones. We develop a Bayes factor model-comparison strategy using Zellner and Siow’s default g-priors appropriate for assessing whether effects obey equality and order restrictions. We apply the methodology to seven data sets from Stroop, Simon, and Eriksen interference tasks. Not too surprisingly, we find that everybody Stroops—that is, for all people congruent colors are truly named more quickly than incongruent ones. But, perhaps surprisingly, we find these order constraints are violated for some people in the Simon task, that is, for these people spatially incongruent responses occur truly more quickly than congruent ones! Implications of the modeling and conjectures about the task-related differences are discussed.
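The "does everybody" question can be illustrated with a crude encompassing-prior sketch (toy posterior summaries, not the paper's g-prior development): the Bayes factor for the order-restricted model against the unconstrained one is the posterior probability that every individual's effect is positive, divided by that event's prior probability:

```python
import random

random.seed(1)

# Hypothetical posterior for I participants' true Stroop effects:
# independent normals with illustrative means/SDs (ms), not real data.
I = 10
post_means = [60, 45, 55, 70, 40, 52, 48, 66, 58, 50]
post_sds = [15] * I

draws = 20000

# Monte Carlo estimate of the posterior probability that *everybody*
# shows a positive effect.
hits = 0
for _ in range(draws):
    if all(random.gauss(m, s) > 0 for m, s in zip(post_means, post_sds)):
        hits += 1
post_prob = hits / draws

# Under a symmetric encompassing prior centered at zero, each sign is
# equally likely a priori, so the prior probability of "all positive"
# is (1/2)**I.
prior_prob = 0.5 ** I

# Encompassing-prior Bayes factor: order-restricted vs. unconstrained.
bf_order = post_prob / prior_prob
```

With ten participants the prior probability of the constraint is already below 0.001, so even a modest posterior probability that everyone is positive yields a large Bayes factor in favor of "everybody Stroops."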


Communication Monographs | 2018

From theories to models to predictions: A Bayesian model comparison approach

Jeffrey N. Rouder; Julia M. Haaf; Frederik Aust

A key goal in research is to use data to assess competing hypotheses or theories. An alternative to conventional significance testing is Bayesian model comparison. The main idea is that competing theories are represented by statistical models. In the Bayesian framework, these models then yield predictions about data even before the data are seen. How well the data match the predictions under competing models may be calculated, and the ratio of these matches – the Bayes factor – is used to assess the evidence for one model compared to another. We illustrate the process of going from theories to models and to predictions in the context of two hypothetical examples about how exposure to media affects attitudes toward refugees.
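The predictions-first logic can be sketched for a hypothetical binomial attitude-shift outcome (numbers invented for illustration, not from the paper): each model's prior predictive probability of the observed data is computed, and their ratio is the Bayes factor:

```python
from math import comb

# Hypothetical data: of n = 20 participants shown a media clip,
# k = 15 report a more positive attitude afterwards.
n, k = 20, 15

# Model 0 ("no effect"): a positive shift is a coin flip, theta = 0.5.
pred_m0 = comb(n, k) * 0.5 ** n

# Model 1 ("some effect"): theta uniform on [0, 1], i.e. a Beta(1, 1)
# prior. Its prior predictive is the beta-binomial, here 1 / (n + 1).
pred_m1 = 1.0 / (n + 1)

# Bayes factor: how much better model 1 predicted the observed data.
bf_10 = pred_m1 / pred_m0
```

Note that both predictions are fixed before any data arrive; the data only select among them, which is the sense in which theories are compared through their predictions.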


Advances in Methods and Practices in Psychological Science | 2018

Power, Dominance, and Constraint: A Note on the Appeal of Different Design Traditions

Jeffrey N. Rouder; Julia M. Haaf

The recent field-wide emphasis on power has brought the number of participants used in psychological experiments into focus. Social psychology typically follows a tradition in which many participants perform a small number of trials each; in psychophysics, the tradition is to include only a few participants, who perform many trials each; and the tradition in cognitive psychology falls in between, balancing the number of participants and trials. We ask whether it is better to add trials or to add participants if one wishes to increase power. The answer is straightforward—greatest power is achieved by using more people, and the gain from adding people is greater than the gain from adding trials. In light of these results, the design parameters in the social psychology tradition seem ideal. Yet there are conditions in which one may trade people for trials with only a minor decrement in power. Under these conditions, the limiting factor is the trial-to-trial variability rather than the variability across people in the population. These conditions are highly plausible, and we present a theoretical argument as to why. We think that most cognitive effects are characterized by stochastic dominance; that is, everyone’s true effect is in the same direction. For example, it is plausible that when performing the Stroop task, all people truly identify congruent colors faster than incongruent ones. When dominance holds, small mean effects imply a small degree of variability across the population. It is this degree of homogeneity, the consequence of dominance, that licenses the design parameters of the cognitive psychology and psychophysics traditions.
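A back-of-the-envelope version of the trials-versus-participants trade-off (a normal-approximation power sketch with invented variance components, not the paper's analysis): the variance of a per-person effect estimate is sd_person² + 2·sd_trial²/J, so adding trials only shrinks the second term:

```python
import math

def power(N, J, mu=5.0, sd_person=10.0, sd_trial=100.0):
    """Approximate power of a two-sided one-sample z-test (alpha = .05)
    on N per-person mean effects, each based on J trials per condition.

    mu        : true mean effect in ms (illustrative)
    sd_person : SD of true effects across people
    sd_trial  : trial-to-trial SD; a difference of two J-trial means
                has variance 2 * sd_trial**2 / J
    """
    var_effect = sd_person ** 2 + 2 * sd_trial ** 2 / J
    se = math.sqrt(var_effect / N)
    z = mu / se
    # P(Z > 1.96 - z) for a standard normal Z
    return 0.5 * math.erfc((1.96 - z) / math.sqrt(2))

base = power(N=50, J=50)
p_people = power(N=200, J=50)   # quadruple the participants...
p_trials = power(N=50, J=200)   # ...or quadruple the trials instead
```

With the same fourfold budget increase, adding participants beats adding trials, and power from adding trials alone is capped: as J grows, the residual variance floors at sd_person², exactly the limiting role of person-level variability the abstract describes.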


Cognition & Emotion | 2018

Of two minds or one? A registered replication of Rydell et al. (2006)

Tobias Heycke; Sarah Marie Gehrmann; Julia M. Haaf; Christoph Stahl

Evaluative conditioning (EC) is proposed as a mechanism of automatic preference acquisition in dual-process theories of attitudes (Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692–731. doi:10.1037/0033-2909.132.5.692). Evidence for the automaticity of EC comes from studies claiming EC effects for subliminally presented stimuli. An impression-formation study showed a selective influence of briefly presented primes on implicitly measured attitudes, whereas supraliminally presented behavioural information about the target person was reflected in explicit ratings (Rydell, R. J., McConnell, A. R., Mackie, D. M., & Strain, L. M. (2006). Of two minds forming and changing valence-inconsistent implicit and explicit attitudes. Psychological Science, 17(11), 954–958. doi:10.1111/j.1467-9280.2006.01811.x). This finding is considered one of the strongest pieces of evidence for dual-process theories (Sweldens, S., Corneille, O., & Yzerbyt, V. (2014). The role of awareness in attitude formation through evaluative conditioning. Personality and Social Psychology Review, 18(2), 187–209. doi:10.1177/1088868314527832), and it is therefore crucial to assess its reliability and robustness. The present study presents two registered replications of the Rydell et al. (2006) study. In contrast to the original findings, the implicit measures did not reflect the valence of the subliminal primes in either study.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2018

A memory-based judgment account of expectancy-liking dissociations in evaluative conditioning

Frederik Aust; Julia M. Haaf; Christoph Stahl

Evaluative conditioning (EC) is a change in liking of neutral conditioned stimuli (CS) following pairings with positive or negative stimuli (unconditioned stimulus, US). A dissociation has been reported between US expectancy and CS evaluation in extinction learning: When CSs are presented alone subsequent to CS-US pairings, participants cease to expect USs but continue to exhibit EC effects. This dissociation is typically interpreted as demonstration that EC is resistant to extinction, and consequently, that EC is driven by a distinct learning process. We tested whether expectancy-liking dissociations are instead caused by different judgment strategies afforded by the dependent measures: CS evaluations are by default integrative judgments—summaries of large portions of the learning history—whereas US expectancy reflects momentary judgments that focus on recent events. In a counterconditioning and two extinction experiments, we eliminated the expectancy-liking dissociation by inducing nondefault momentary evaluative judgments, and demonstrated a reversed dissociation when we additionally induced nondefault integrative expectancy judgments. Our findings corroborated a priori predictions derived from the formal memory model MINERVA 2. Hence, dissociations between US expectancy and CS evaluation are consistent with a single-process learning model; they reflect different summaries of the learning history.


Psychonomic Bulletin & Review | 2018

Some do and some don’t? Accounting for variability of individual difference structures.

Julia M. Haaf; Jeffrey N. Rouder

A prevailing notion in experimental psychology is that individuals’ performance in a task varies gradually in a continuous fashion. In a Stroop task, for example, the true average effect may be 50 ms with a standard deviation of say 30 ms. In this case, some individuals will have greater effects than 50 ms, some will have smaller, and some are forecasted to have negative effects in sign—they respond faster to incongruent items than to congruent ones! But are there people who have a true negative effect in Stroop or any other task? We highlight three qualitatively different effects: negative effects, null effects, and positive effects. The main goal of this paper is to develop models that allow researchers to explore whether all three are present in a task: Do all individuals show a positive effect? Are there individuals with truly no effect? Are there any individuals with negative effects? We develop a family of Bayesian hierarchical models that capture a variety of these constraints. We apply this approach to Stroop interference experiments and a near-liminal priming experiment where the prime may be below and above threshold for different people. We show that in most tasks people are quite alike—for example, everyone has positive Stroop effects; nobody fails to Stroop or Stroops negatively. We also show that, under very specific circumstances, we could entice some people not to Stroop at all.
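A crude illustration of the three qualitative classes (per-person posterior summaries with invented numbers, not the hierarchical model family the paper develops): each participant is labeled negative, practically null, or positive according to where their posterior mass falls:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Hypothetical per-participant posterior summaries (mean, SD) of a
# priming effect in ms -- illustrative numbers only.
posteriors = [(45, 12), (1, 10), (-25, 11), (60, 14)]

eps = 5.0  # treat effects within +/- 5 ms of zero as "practically null"

labels = []
for m, s in posteriors:
    p_neg = phi((-eps - m) / s)          # mass below -eps
    p_null = phi((eps - m) / s) - p_neg  # mass in [-eps, eps]
    p_pos = 1.0 - p_neg - p_null         # mass above eps
    label = max((p_neg, "negative"), (p_null, "null"),
                (p_pos, "positive"))[1]
    labels.append(label)
```

This sketch treats each participant independently; the paper's hierarchical models instead borrow strength across people and put prior probability directly on the mixture of classes, which is what makes the "some do and some don't" question answerable.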


Advances in Methods and Practices in Psychological Science | 2018

Bayesian inference and testing any hypothesis you can specify

Alexander Etz; Julia M. Haaf; Jeffrey N. Rouder; Joachim Vandekerckhove

Hypothesis testing is a special form of model selection. Once a pair of competing models is fully defined, their definition immediately leads to a measure of how strongly each model supports the data. The ratio of their support is often called the likelihood ratio or the Bayes factor. Critical in the model-selection endeavor is the specification of the models. In the case of hypothesis testing, it is of the greatest importance that the researcher specify exactly what is meant by a “null” hypothesis as well as the alternative to which it is contrasted, and that these are suitable instantiations of theoretical positions. Here, we provide an overview of different instantiations of null and alternative hypotheses that can be useful in practice, but in all cases the inferential procedure is based on the same underlying method of likelihood comparison. An associated app can be found at https://osf.io/mvp53/. This article is the work of the authors and is reformatted from the original, which was published under a CC-By Attribution 4.0 International license and is available at https://psyarxiv.com/wmf3r/.


Journal of Mathematical Psychology | 2017

Is there variation across individuals in processing? Bayesian analysis for systems factorial technology

Jonathan E. Thiele; Julia M. Haaf; Jeffrey N. Rouder

Collaboration


Dive into Julia M. Haaf's collaborations.

Top Co-Authors

Alexander Etz

University of California


Olivier Corneille

Université catholique de Louvain
