
Publication


Featured research published by Jesse Chandler.


Current Directions in Psychological Science | 2014

Inside the Turk: Understanding Mechanical Turk as a Participant Pool

Gabriele Paolacci; Jesse Chandler

Mechanical Turk (MTurk), an online labor market created by Amazon, has recently become popular among social scientists as a source of survey and experimental data. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. We discuss the characteristics of MTurk as a participant pool for psychology and other social sciences, highlighting the traits of the MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.


Clinical Psychological Science | 2013

Using Mechanical Turk to Study Clinical Populations

Danielle N. Shapiro; Jesse Chandler; Pam Mueller

Although participants with psychiatric symptoms, specific risk factors, or rare demographic characteristics can be difficult to identify and recruit for participation in research, participants with these characteristics are crucial for research in the social, behavioral, and clinical sciences. Online research in general and crowdsourcing software in particular may offer a solution. However, no research to date has examined the utility of crowdsourcing software for conducting research on psychopathology. In the current study, we examined the prevalence of several psychiatric disorders and related problems, as well as the reliability and validity of participant reports on these domains, among users of Amazon’s Mechanical Turk. Findings suggest that crowdsourcing software offers several advantages for clinical research while providing insight into potential problems, such as misrepresentation, that researchers should address when collecting data online.


Behavior Research Methods | 2014

Nonnaïveté Among Amazon Mechanical Turk Workers: Consequences and Solutions for Behavioral Researchers

Jesse Chandler; Pam Mueller; Gabriele Paolacci

Crowdsourcing services—particularly Amazon Mechanical Turk—have made it easy for behavioral scientists to recruit research participants. However, researchers have overlooked crucial differences between crowdsourcing and traditional recruitment methods that provide unique opportunities and challenges. We show that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in the exclusion of research participants. We describe how both of these problems can be avoided using advanced interface features that also allow prescreening and longitudinal data collection. Using these techniques can minimize the effects of previously ignored drawbacks and expand the scope of crowdsourcing as a tool for psychological research.
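The exclusion approach described here can be implemented with MTurk's qualification system. The following is a minimal sketch using Python's boto3 client, not the authors' own tooling; the qualification name, worker IDs, and HIT parameters are hypothetical placeholders.

```python
# A minimal sketch of the exclusion strategy described above, using the
# boto3 MTurk client (not the authors' own tooling). The qualification
# name, worker IDs, and HIT parameters are hypothetical placeholders.
import boto3

# MTurk's API is served only from us-east-1.
mturk = boto3.client("mturk", region_name="us-east-1")

# 1. Create a qualification marking workers who took an earlier study.
qual = mturk.create_qualification_type(
    Name="CompletedStudySeriesA",  # hypothetical name
    Description="Completed an earlier study in this series",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2. Flag every worker who already participated (IDs taken from the
#    prior studies' data files; these are placeholders).
for worker_id in ["A1EXAMPLEID", "A2EXAMPLEID"]:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,  # do not alert workers to the exclusion
    )

# 3. Post the new HIT so flagged workers cannot discover or accept it.
mturk.create_hit(
    Title="Decision-making survey",
    Description="A 10-minute survey",
    Reward="1.00",
    MaxAssignments=100,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=86400,
    Question=open("question.xml").read(),  # ExternalQuestion XML
    QualificationRequirements=[{
        "QualificationTypeId": qual_id,
        "Comparator": "DoesNotExist",
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }],
)
```

Guarding DiscoverPreviewAndAccept hides the task from flagged workers entirely, which prevents repeat participation up front rather than excluding participants after the fact.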


Annual Review of Clinical Psychology | 2016

Conducting Clinical Research Using Crowdsourced Convenience Samples

Jesse Chandler; Danielle N. Shapiro

Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular crowdsourcing labor market. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality, with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk, many of which are common to other nonprobability samples but unfamiliar to clinical science researchers, and suggests concrete steps to avoid these issues or minimize their impact.


Science | 2016

Response to Comment on "Estimating the reproducibility of psychological science"

Christopher Jon Anderson; Štěpán Bahník; Michael Barnett-Cowan; Frank A. Bosco; Jesse Chandler; Christopher R. Chartier; Felix Cheung; Cody D. Christopherson; Andreas Cordes; Edward Cremata; Nicolás Della Penna; Vivien Estel; Anna Fedor; Stanka A. Fitneva; Michael C. Frank; James A. Grange; Joshua K. Hartshorne; Fred Hasselman; Felix Henninger; Marije van der Hulst; Kai J. Jonas; Calvin Lai; Carmel A. Levitan; Jeremy K. Miller; Katherine Sledge Moore; Johannes Meixner; Marcus R. Munafò; Koen Ilja Neijenhuijs; Gustav Nilsonne; Brian A. Nosek

Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted.


Handbook of Human Computation | 2013

Risks and Rewards of Crowdsourcing Marketplaces

Jesse Chandler; Gabriele Paolacci; Pam Mueller

Crowdsourcing has become an increasingly popular means of flexibly deploying large amounts of human computational power. The present chapter investigates the role of microtask labor marketplaces in managing human and hybrid human-machine computing. Labor marketplaces offer many advantages that, in combination, allow human intelligence to be allocated across projects rapidly and efficiently and information to be transmitted effectively between market participants. Human computation comes with a set of challenges that are distinct from machine computation, including increased unsystematic error (e.g., mistakes) and systematic error (e.g., cognitive biases), both of which can be exacerbated when motivation is low, incentives are misaligned, and task requirements are poorly communicated. We provide specific guidance about how to ameliorate these issues through task design, workforce selection, and data cleaning and aggregation.
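The aggregation step mentioned above can be illustrated with a simple majority vote over redundant worker labels, one common way to absorb unsystematic error. This is a generic sketch with hypothetical data, not code from the chapter.

```python
# A generic illustration (with hypothetical data) of the aggregation
# step mentioned above: redundant labels from independent workers are
# combined by majority vote to absorb unsystematic error.
from collections import Counter

# item -> labels collected from three independent workers
labels = {
    "img_01": ["cat", "cat", "dog"],
    "img_02": ["dog", "dog", "dog"],
    "img_03": ["cat", "dog", "bird"],
}

def majority_vote(votes):
    """Return the modal label and its share of the votes.
    Ties break arbitrarily (first label counted wins)."""
    (label, count), = Counter(votes).most_common(1)
    return label, count / len(votes)

for item, votes in labels.items():
    label, agreement = majority_vote(votes)
    # Low agreement flags items worth re-collecting or adjudicating.
    print(f"{item}: {label} (agreement {agreement:.0%})")
```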


Social Psychological and Personality Science | 2017

Lie for a Dime: When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors

Jesse Chandler; Gabriele Paolacci

The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that a substantial number of participants misrepresent theoretically relevant characteristics (e.g., demographics, product ownership) to meet eligibility criteria that are explicit in the study, inferred from a previous exclusion, or inferred from experience with similar studies. When researchers recruit rare populations, a large proportion of respondents can be impostors. We provide recommendations for excluding ineligible participants that are applicable to a wide variety of data collection efforts that rely on self-report.
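One way to implement this kind of safeguard is an unannounced two-stage screen: ask the eligibility-relevant questions inside a general survey, then quietly grant a qualification to eligible respondents and require it for the main study, so participants never learn which answer made them eligible. Below is a minimal sketch assuming Python's boto3 MTurk client; all IDs and the eligibility test are hypothetical, and this illustrates the general approach rather than the authors' actual procedure.

```python
# A sketch of an unannounced two-stage screen, using the boto3 MTurk
# client. The qualification ID, worker IDs, and eligibility test are
# hypothetical; this illustrates the approach, not the authors' own
# procedure.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

SCREENER_QUAL_ID = "3EXAMPLEQUALID"  # created beforehand (placeholder)

# Answers from an earlier general-purpose survey that never mentioned
# the eligibility criterion (placeholder data).
screener_responses = {
    "A1EXAMPLEID": {"owns_product": True},
    "A2EXAMPLEID": {"owns_product": False},
}

# Grant the qualification only to eligible respondents, silently.
for worker_id, answers in screener_responses.items():
    if answers["owns_product"]:  # eligibility judged offline
        mturk.associate_qualification_with_worker(
            QualificationTypeId=SCREENER_QUAL_ID,
            WorkerId=worker_id,
            IntegerValue=1,
            SendNotification=False,  # never disclose the criterion
        )

# Pass this as QualificationRequirements when creating the main-study
# HIT: only pre-verified workers can discover, preview, or accept it.
main_study_requirement = [{
    "QualificationTypeId": SCREENER_QUAL_ID,
    "Comparator": "EqualTo",
    "IntegerValues": [1],
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}]
```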


Psychological Science | 2012

Fast Thought Speed Induces Risk Taking

Jesse Chandler; Emily Pronin

In two experiments, we tested for a causal link between thought speed and risk taking. In Experiment 1, we manipulated thought speed by presenting neutral-content text at either a fast or a slow pace and having participants read the text aloud. In Experiment 2, we manipulated thought speed by presenting fast-, medium-, or slow-paced movie clips that contained similar content. Participants who were induced to think more quickly took more risks with actual money in Experiment 1 and reported greater intentions to engage in real-world risky behaviors, such as unprotected sex and illegal drug use, in Experiment 2. These experiments provide evidence that faster thinking induces greater risk taking.


Trends in Cognitive Sciences | 2017

Crowdsourcing Samples in Cognitive Science

Neil Stewart; Jesse Chandler; Gabriele Paolacci

Crowdsourcing data collection from research participants recruited from online labor markets is now common in cognitive science. We review who is in the crowd and who can be reached by the average laboratory. We discuss reproducibility and review some recent methodological innovations for online experiments. We consider the design of research studies and arising ethical issues. We review how to code experiments for the web, what is known about video and audio presentation, and the measurement of reaction times. We close with comments about the high levels of experience of many participants and an emerging tragedy of the commons.


Consciousness and Cognition | 2013

Lost in the Crowd: Entitative Group Membership Reduces Mind Attribution

Carey K. Morewedge; Jesse Chandler; Robert Smith; Norbert Schwarz; Jonathan W. Schooler

This research examined how and why group membership diminishes the attribution of mind to individuals. We found that mind attribution was inversely related to the size of the group to which an individual belonged (Experiment 1). Mind attribution was affected by group membership rather than the total number of entities perceived at once (Experiment 2). Moreover, mind attribution to an individual varied with the perception that the individual was a group member. Participants attributed more mind to an individual that appeared distinct or distant from other group members than to an individual that was perceived to be similar or proximal to a cohesive group (Experiments 3 and 4). This effect occurred for both human and nonhuman targets, and was driven by the perception of the target as an entitative group member rather than by the knowledge that the target was an entitative group member (Experiment 5).

Collaboration


Jesse Chandler's top co-authors:

Gabriele Paolacci, Erasmus University Rotterdam
Norbert Schwarz, University of Southern California
Fred Hasselman, Radboud University Nijmegen