Publication


Featured research published by Amber M. Sprenger.


Psychological Review | 2008

Diagnostic Hypothesis Generation and Human Judgment.

Rick P. Thomas; Michael R. Dougherty; Amber M. Sprenger; J. Isaiah Harbison

Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the environment. The authors introduce a general model of human judgment aimed at describing how people generate hypotheses from memory and how these hypotheses serve as the basis of probability judgment and hypothesis testing. In 3 simulation studies, the authors illustrate the properties of the model, as well as its applicability to explaining several common findings in judgment and decision making, including how errors and biases in hypothesis generation can cascade into errors and biases in judgment.
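
A minimal sketch in Python of the kind of generation-plus-judgment process the abstract describes, with purely hypothetical association strengths (an illustration of the general idea, not the authors' model code): hypotheses are sampled from memory in proportion to their association with the observed data, and the focal hypothesis is judged only against the alternatives that actually came to mind, so failures of generation inflate judged probability.

import random

# Hypothetical long-run association strengths between an observed symptom
# pattern and four candidate disease hypotheses (illustrative numbers only).
STRENGTHS = {"flu": 0.50, "cold": 0.30, "strep": 0.15, "mono": 0.05}

def generate_hypotheses(strengths, n_samples=3, rng=random):
    """Sample hypotheses from memory; more strongly associated
    hypotheses are more likely to come to mind."""
    names = list(strengths)
    weights = [strengths[h] for h in names]
    generated = set()
    for _ in range(n_samples):
        generated.add(rng.choices(names, weights=weights)[0])
    return generated

def judged_probability(focal, generated, strengths):
    """Judge the focal hypothesis against only the generated set:
    alternatives that never came to mind are left out of the comparison."""
    if focal not in generated:
        return 0.0
    return strengths[focal] / sum(strengths[h] for h in generated)

random.seed(0)
generated = generate_hypotheses(STRENGTHS)
print(generated, judged_probability("flu", generated, STRENGTHS))
# With few retrieval samples, weak rivals such as "mono" are rarely
# generated, so "flu" is judged more probable than its true relative
# strength: a generation bias cascading into a judgment bias.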


Psychonomic Bulletin & Review | 2014

When decision heuristics and science collide

Erica C. Yu; Amber M. Sprenger; Rick P. Thomas; Michael R. Dougherty

The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of “researcher degrees of freedom.” This article advances this debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on p values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little-appreciated fact that the process of doing science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
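
A minimal Monte Carlo sketch in Python of one such p-value-based data collection heuristic, under assumptions chosen here for illustration (a true null effect, batches of 10 observations, a cap of 100); this is not the authors' simulation code, but it shows the mechanism by which "test, then top up the sample" inflates the false-positive rate above the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def optional_stopping_run(batch=10, max_n=100):
    """Collect null data in batches, testing after each batch, and stop
    as soon as p < .05; return whether the run ends 'significant'."""
    data = rng.standard_normal(batch)  # the true effect is exactly zero
    while data.size < max_n:
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:
            return True  # stop early and declare an effect
        data = np.concatenate([data, rng.standard_normal(batch)])
    return stats.ttest_1samp(data, 0.0).pvalue < 0.05

runs = 2000
rate = sum(optional_stopping_run() for _ in range(runs)) / runs
print(rate)  # substantially above the nominal 0.05 false-positive rate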


Frontiers in Psychology | 2011

Implications of cognitive load for hypothesis generation and probability judgment.

Amber M. Sprenger; Michael R. Dougherty; Sharona M. Atkins; Ana M. Franco-Watkins; Rick P. Thomas; Nicholas D. Lange; Brandon Abbs

We tested the predictions of HyGene (Thomas et al., 2008) that divided attention at both encoding and judgment should affect the degree to which participants’ probability judgments violate the principle of additivity. In two experiments, we showed that divided attention during judgment leads to an increase in subadditivity, suggesting that the comparison process for probability judgments is capacity limited. Contrary to the predictions of HyGene, a third experiment revealed that divided attention during encoding leads to an increase in later probability judgments made under full attention. The effect of divided attention during encoding on judgment was completely mediated by the number of hypotheses participants generated, indicating that limitations in both encoding and recall can cascade into biases in judgments.
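
A short worked example of the additivity violation at issue, with made-up judgment values: for mutually exclusive and exhaustive hypotheses the judged probabilities should sum to 1, and the excess over 1 is a simple index of subadditivity.

# Hypothetical judged probabilities for mutually exclusive, exhaustive
# hypotheses; a coherent set of judgments would sum to exactly 1.
judgments_full_attention = [0.40, 0.30, 0.20, 0.20]
judgments_divided_attention = [0.50, 0.40, 0.30, 0.25]

def subadditivity(judgments):
    """Excess of the summed component judgments over 1."""
    return sum(judgments) - 1.0

print(round(subadditivity(judgments_full_attention), 2))     # 0.1
print(round(subadditivity(judgments_divided_attention), 2))  # 0.45: larger violation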


Experimental Psychology | 2014

Measuring working memory is all fun and games: a four-dimensional spatial game predicts cognitive task performance.

Sharona M. Atkins; Amber M. Sprenger; Gregory J. H. Colflesh; Timothy L. Briner; Jacob B. Buchanan; Sydnee E. Chavis; Sy-yu Chen; Gregory L. Iannuzzi; Vadim Kashtelyan; Eamon Dowling; J. Isaiah Harbison; Donald J. Bolger; Michael F. Bunting; Michael R. Dougherty

We developed a novel four-dimensional spatial task called Shapebuilder and used it to predict performance on a wide variety of cognitive tasks. In six experiments, we illustrate that Shapebuilder: (1) loads on a common factor with complex working memory (WM) span tasks and predicts performance on quantitative reasoning tasks and Raven’s Progressive Matrices (Experiment 1), and (2) correlates well with traditional complex WM span tasks (Experiment 2), predicts performance on the conditional go/no-go task (Experiment 3) and the N-back task (Experiment 4), and shows weak or nonsignificant correlations with the Attention Networks Task (Experiment 5) and task switching (Experiment 6). Shapebuilder exhibits minimal skew and kurtosis and shows good reliability. We argue that Shapebuilder has many advantages over existing measures of WM, including the fact that it is largely language independent, is not prone to ceiling effects, and takes less than 6 min to complete on average.
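
A minimal sketch in Python of the distributional and reliability checks the abstract mentions, run on simulated scores rather than Shapebuilder data (the person-level and trial-level variances below are invented for illustration): low skew and kurtosis indicate scores are not piling up at a ceiling, and a split-half correlation gives a simple reliability estimate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical data: 200 participants, 24 trials; each participant has a
# stable ability plus trial-level noise (illustrative numbers).
ability = rng.normal(50, 10, size=(200, 1))
trial_scores = ability + rng.normal(0, 15, size=(200, 24))
totals = trial_scores.sum(axis=1)

print(stats.skew(totals), stats.kurtosis(totals))  # near 0: no ceiling pile-up

# Split-half reliability: correlate odd- and even-trial totals, then apply
# the Spearman-Brown correction for the full-length test.
odd = trial_scores[:, ::2].sum(axis=1)
even = trial_scores[:, 1::2].sum(axis=1)
r = np.corrcoef(odd, even)[0, 1]
print(2 * r / (1 + r))  # corrected reliability (high for these simulated scores)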


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2012

Generating and evaluating options for decision making: The impact of sequentially presented evidence.

Amber M. Sprenger; Michael R. Dougherty

We examined how decision makers generate and evaluate hypotheses when data are presented sequentially. In the first 2 experiments, participants learned the relationship between data and possible causes of the data in a virtual environment. Data were then presented iteratively, and participants either generated hypotheses they thought caused the data or rated the probability of possible causes of the data. In a 3rd experiment, participants generated hypotheses and made probability judgments on the basis of previously stored general knowledge. Findings suggest that both the hypotheses one generates and the judged probability of those hypotheses are heavily influenced by the most recent evidence observed and by the diagnosticity of the evidence. Specifically, participants generated a narrow set of possible explanations when the presented evidence was diagnostic compared with when it was nondiagnostic, suggesting that nondiagnostic evidence entices participants to cast a wider net when generating hypotheses.
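
A minimal Bayesian sketch in Python of why diagnosticity should narrow the generated set, with hypothetical likelihoods (invented for illustration, not taken from the experiments): diagnostic data concentrate the posterior on one cause, while nondiagnostic data leave several causes comparably plausible, which is when casting a wider net pays off.

# Three candidate causes, a uniform prior, and hypothetical likelihoods
# of one observed datum under each cause.
priors = {"cause_A": 1 / 3, "cause_B": 1 / 3, "cause_C": 1 / 3}
diagnostic_datum = {"cause_A": 0.80, "cause_B": 0.15, "cause_C": 0.05}
nondiagnostic_datum = {"cause_A": 0.40, "cause_B": 0.35, "cause_C": 0.25}

def posterior(priors, likelihoods):
    """Bayes' rule over the candidate causes."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

print(posterior(priors, diagnostic_datum))     # mass concentrates on cause_A
print(posterior(priors, nondiagnostic_datum))  # mass stays spread across causes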


Psychonomic Bulletin & Review | 2014

Reply to Rouder (2014): Good frequentist properties raise confidence

Adam N. Sanborn; Thomas T. Hills; Michael R. Dougherty; Rick P. Thomas; Erica C. Yu; Amber M. Sprenger

Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers’ statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.
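
A minimal sketch in Python of monitoring a Bayes factor under optional stopping, using a deliberately simple point-null versus point-alternative normal model with known variance so the Bayes factor is analytic (the model and thresholds are assumptions for illustration, not the models used in the cited papers): the stopping rule does not change the evidential interpretation of any particular BF10, but it does change how often runs end at a given threshold.

import numpy as np

rng = np.random.default_rng(2)
MU1 = 0.5  # hypothetical point alternative; H0 is mu = 0, sigma = 1

def bf10(x):
    """Analytic Bayes factor for H1: mu = MU1 vs. H0: mu = 0 (unit variance)."""
    n = x.size
    return np.exp(n * (x.mean() * MU1 - MU1 ** 2 / 2))

def run_until_threshold(batch=5, max_n=200, threshold=3.0):
    """Sample null data in batches; stop once BF10 crosses 3 or 1/3."""
    x = rng.standard_normal(batch)  # data generated under H0
    while x.size < max_n:
        bf = bf10(x)
        if bf >= threshold or bf <= 1 / threshold:
            return bf
        x = np.concatenate([x, rng.standard_normal(batch)])
    return bf10(x)

results = [run_until_threshold() for _ in range(2000)]
print(np.mean([bf >= 3.0 for bf in results]))  # share of runs ending at BF10 >= 3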


Intelligence | 2013

Training working memory: Limits of transfer

Amber M. Sprenger; Sharona M. Atkins; Donald J. Bolger; J. Isaiah Harbison; Jared M. Novick; Jeffrey S. Chrabaszcz; Scott A. Weems; Vanessa Smith; Steven Bobb; Michael F. Bunting; Michael R. Dougherty


Journal of Experimental Psychology: General | 2006

The influence of improper sets of information on judgment: how irrelevant information can bias judged probability.

Michael R. Dougherty; Amber M. Sprenger


Organizational Behavior and Human Decision Processes | 2006

Differences between probability and frequency judgments: The role of individual differences in working memory capacity

Amber M. Sprenger; Michael R. Dougherty


Medicine and Science in Sports and Exercise | 2014

Variability in Learning in Adults Explained by Cardiovascular Fitness, Physical Activity, and APOE Genotype

Maureen K. Kayes; Andrew C. Venezia; Amber M. Sprenger; Stephen M. Roth; Michael R. Dougherty; Donald J. Bolger; Bradley D. Hatfield
