Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cade Massey is active.

Publication


Featured research published by Cade Massey.


Management Science | 2005

Detecting Regime Shifts: The Causes of Under- and Overreaction

Cade Massey; George Wu

Many decision makers operate in dynamic environments in which markets, competitors, and technology change regularly. The ability to detect and respond to these regime shifts is critical for economic success. We conduct three experiments to test how effective individuals are at detecting such regime shifts. Specifically, we investigate when individuals are most likely to underreact to change and when they are most likely to overreact to it. We develop a system-neglect hypothesis: Individuals react primarily to the signals they observe and secondarily to the environmental system that produced the signal. The experiments, two involving probability estimation and one involving prediction, reveal a behavioral pattern consistent with our system-neglect hypothesis: Underreaction is most common in unstable environments with precise signals, and overreaction is most common in stable environments with noisy signals. We test this pattern formally in a statistical comparison of the Bayesian model with a parametric specification of the system-neglect model.
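The system-neglect idea can be sketched computationally. The following Python sketch is illustrative only: the parameter-blending agent is a hypothetical simplification of system neglect, not the paper's actual parametric specification, and the parameter values are made up. It contrasts a Bayesian observer with an agent who reacts fully to signals but shrinks the system parameters toward a common default.

```python
def shift_posterior(signals, p_shift, p_high, p_low):
    """Bayesian probability that a regime shift has occurred, for a
    two-regime process emitting binary signals.

    p_shift -- per-period probability the regime shifts (instability)
    p_high  -- P(signal = 1 | post-shift regime)
    p_low   -- P(signal = 1 | pre-shift regime)
    """
    p = 0.0  # P(shift has already happened)
    for s in signals:
        p = p + (1 - p) * p_shift                  # regime may shift now
        like_post = p_high if s else 1 - p_high    # signal likelihoods
        like_pre = p_low if s else 1 - p_low
        p = p * like_post / (p * like_post + (1 - p) * like_pre)
    return p

def neglect_posterior(signals, p_shift, p_high, p_low,
                      default=(0.05, 0.7, 0.3), alpha=0.5):
    """Hypothetical system-neglect agent: same update rule, but the
    environment's parameters are shrunk toward a fixed default, so all
    environments are treated as more alike than they really are."""
    blend = lambda x, d: alpha * x + (1 - alpha) * d
    return shift_posterior(signals,
                           blend(p_shift, default[0]),
                           blend(p_high, default[1]),
                           blend(p_low, default[2]))

# Unstable environment, precise signals -> the agent underreacts:
b1 = shift_posterior([1, 1], 0.2, 0.9, 0.1)     # ~0.96
n1 = neglect_posterior([1, 1], 0.2, 0.9, 0.1)   # ~0.76

# Stable environment, noisy signals -> the agent overreacts:
b2 = shift_posterior([1, 1], 0.02, 0.6, 0.4)    # ~0.07
n2 = neglect_posterior([1, 1], 0.02, 0.6, 0.4)  # ~0.16
```

With these (made-up) parameters, the neglect agent's posterior sits below the Bayesian benchmark in the unstable/precise environment and above it in the stable/noisy one, mirroring the under- and overreaction pattern the abstract describes.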


Journal of Experimental Psychology: General | 2015

Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

Berkeley J. Dietvorst; Joseph P. Simmons; Cade Massey

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.


Psychological Science | 2011

Hope Over Experience: Desirability and the Persistence of Optimism

Cade Massey; Joseph P. Simmons; David A. Armor

Many important decisions hinge on expectations of future outcomes. Decisions about health, investments, and relationships all depend on predictions of the future. These expectations are often optimistic: People frequently believe that their preferred outcomes are more likely than is merited. Yet it is unclear whether optimism persists with experience and, surprisingly, whether optimism is truly caused by desire. These are important questions because life’s most consequential decisions often feature both strong preferences and the opportunity to learn. We investigated these questions by collecting football predictions from National Football League fans during each week of the 2008 season. Despite accuracy incentives and extensive feedback, predictions about preferred teams remained optimistically biased through the entire season. Optimism was as strong after 4 months as it was after 4 weeks. We exploited variation in preferences and matchups to show that desirability fueled this optimistic bias.


Management Science | 2013

The Loser's Curse: Decision Making and Market Efficiency in the National Football League Draft

Cade Massey; Richard H. Thaler

A question of increasing interest to researchers in a variety of fields is whether the biases found in judgment and decision-making research remain present in contexts in which experienced participants face strong economic incentives. To investigate this question, we analyze the decision making of National Football League teams during their annual player draft. This is a domain in which monetary stakes are exceedingly high and the opportunities for learning are rich. It is also a domain in which multiple psychological factors suggest that teams may overvalue the chance to pick early in the draft. Using archival data on draft-day trades, player performance, and compensation, we compare the market value of draft picks with the surplus value to teams provided by the drafted players. We find that top draft picks are significantly overvalued in a manner that is inconsistent with rational expectations and efficient markets, and consistent with psychological research. This paper was accepted by Uri Gneezy, behavioral economics.
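The paper's core comparison is between what the trade market says a pick is worth and the surplus (performance value minus compensation) the drafted player actually delivers. A minimal sketch of how that comparison is set up, using purely hypothetical figures for illustration (these are not the paper's estimates):

```python
def surplus_value(performance_value, compensation):
    """Surplus a drafted player provides: the market cost of equivalent
    veteran performance, minus the player's actual compensation."""
    return performance_value - compensation

# Hypothetical figures, normalized to 'pick-1 units', illustration only.
picks = {
    1:  {"market_price": 1.00, "performance": 1.00, "compensation": 0.80},
    10: {"market_price": 0.55, "performance": 0.85, "compensation": 0.35},
    33: {"market_price": 0.25, "performance": 0.70, "compensation": 0.15},
}
for num, v in picks.items():
    s = surplus_value(v["performance"], v["compensation"])
    print(f"pick {num}: market {v['market_price']:.2f}, surplus {s:.2f}")
```

Under numbers like these, surplus rises as market price falls through the early picks; a pattern of that shape (top picks priced highest while delivering the least surplus) is what the paper documents as the loser's curse.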


Psychological Science | 2008

Prescribed Optimism: Is It Right to Be Wrong About the Future?

David A. Armor; Cade Massey; Aaron M. Sackett

Personal predictions are often optimistically biased. This simple observation has troubling implications for psychologists, economists, and decision theorists concerned with rationality and the accuracy of self-knowledge (Armor & Taylor, 2002; Krizan & Windschitl, 2007; Sweeny, Carroll, & Shepperd, 2006). However, normative conclusions about the impropriety of optimistic bias rest on an untested assumption: that people desire to be accurate when making personal predictions. If people believe, rightly or wrongly, that unrealistic optimism has some value, then optimistic bias may be usefully understood as being consistent with people's values and beliefs.


Journal of Experimental Psychology: General | 2012

Is Optimism Real?

Joseph P. Simmons; Cade Massey

Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their favorite team and the other half (the neutrals) predicted a game involving two teams they were neutral about. Participants were promised either a small incentive ($5) or a large incentive ($50) for correctly predicting the game's winner. Optimism emerged even when incentives were large, as partisans were much more likely than neutrals to predict partisans' favorite teams to win. Strong optimism also emerged among participants whose responses to follow-up questions strongly suggested that they believed the predictions they made. This research supports the claim that optimism is real.


Journal of Economic Behavior and Organization | 2017

Small Cues Change Savings Choices

James J. Choi; Emily Haisley; Jennifer Kurkoski; Cade Massey

In randomized field experiments, we embedded one- to two-sentence anchoring, goal-setting, or savings threshold cues in emails to employees about their 401(k) savings plan. We find that anchors increase or decrease 401(k) contribution rates by up to 1.4% of income. A high savings goal example raises contribution rates by up to 2.2% of income. Highlighting a higher savings threshold in the match incentive structure raises contributions by up to 1.5% of income relative to highlighting the lower threshold. Highlighting the maximum possible contribution rate raises contribution rates by up to 2.9% of income among low savers.


Archive | 2014

Learning to Detect Change

Ye Li; Cade Massey; George Wu


Archive | 2010

Optimism and Economic Crisis

Ron Kaniel; Cade Massey; David T. Robinson


Management Science | 2016

Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

Berkeley J. Dietvorst; Joseph P. Simmons; Cade Massey

Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants' preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not of a desire for greater control, as their preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm's forecast.
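The "slightly modifiable" choice described in this abstract amounts to a one-line constraint: the forecaster may move the algorithm's output, but only within a fixed band. A minimal sketch (the function name and numbers are hypothetical; the actual cap sizes varied across the studies):

```python
def constrained_forecast(algo_forecast, desired_adjustment, max_change):
    """Final forecast when a human may nudge an algorithm's output by
    at most max_change in either direction (clamped adjustment)."""
    change = max(-max_change, min(max_change, desired_adjustment))
    return algo_forecast + change

# The human wants to move a forecast of 70 up by 15, but the cap is 5:
constrained_forecast(70, 15, max_change=5)   # -> 75
```

The finding is that even a tight cap like this preserved most of the willingness to use the algorithm, suggesting the sense of some control matters more than the amount of control.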

Collaboration


Dive into Cade Massey's collaborations.

Top Co-Authors

George Wu (University of Chicago)
Joseph P. Simmons (University of Pennsylvania)
David T. Robinson (National Bureau of Economic Research)
Ron Kaniel (University of Rochester)
David A. Armor (San Diego State University)
Ye Li (University of California)