Matthew DeBell
Stanford University
Publications
Featured research published by Matthew DeBell.
Science | 2017
Richard C. Bishop; Kevin J. Boyle; Richard T. Carson; David S. Chapman; W. Michael Hanemann; Barbara Kanninen; Raymond J. Kopp; Jon A. Krosnick; John A. List; Norman Meade; Robert Paterson; Stanley Presser; V. Kerry Smith; Roger Tourangeau; Michael J. Welsh; Jeffrey M. Wooldridge; Matthew DeBell; Colleen Donovan; Matthew Konopka; Nora Scherer
Stated-preference research supports $17.2 billion in protections. When large-scale accidents cause catastrophic damage to natural or cultural resources, government and industry are faced with the challenge of assessing the extent of damages and the magnitude of restoration that is warranted. Although market transactions for privately owned assets provide information about how valuable they are to the people involved, the public services of natural assets are not exchanged on markets; thus, efforts to learn about people's values involve either untestable assumptions about how other things people do relate to these services or empirical estimates based on responses to stated-preference surveys. Valuation based on such surveys has been criticized because the respondents are not engaged in real transactions. Our research in the aftermath of the 2010 BP Deepwater Horizon oil spill addresses these criticisms using the first nationally representative stated-preference survey that tests whether responses are consistent with rational economic choices that are expected with real transactions. Our results confirm that the survey findings are consistent with economic decisions and would support investing at least $17.2 billion to prevent such injuries in the future to the Gulf of Mexico's natural resources.
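The abstract does not spell out how the $17.2 billion lower bound was computed. Purely as an illustration of how referendum-style yes/no responses at randomly assigned cost amounts can support an "at least" figure, the sketch below applies a Turnbull-type lower-bound calculation to hypothetical data; the bids, response shares, and household count are invented, not taken from the study.

```python
import numpy as np

# Hypothetical single-bounded referendum data: each respondent saw one
# randomly assigned program cost ("bid") and voted yes or no.
bids    = np.array([15, 45, 80, 135, 265], dtype=float)   # USD per household (made up)
pct_yes = np.array([0.60, 0.52, 0.46, 0.41, 0.34])        # share voting yes (made up)

# The share willing to pay should not rise with cost; enforce monotonicity
# with a running minimum (real analyses pool adjacent violators).
survival = np.minimum.accumulate(pct_yes)

# Turnbull lower bound: value each probability interval at its lower
# endpoint, and value everyone below the smallest bid at zero.
survival_next = np.append(survival[1:], 0.0)
mean_wtp_lower_bound = np.sum(bids * (survival - survival_next))

households = 116_000_000                                   # hypothetical household count
print(f"Lower-bound mean WTP per household: ${mean_wtp_lower_bound:,.2f}")
print(f"Lower-bound aggregate value: ${mean_wtp_lower_bound * households:,.0f}")
```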
Sociological Methods & Research | 2018
Matthew DeBell; Jon A. Krosnick; Katie Gera; David S. Yeager; Michael P. McDonald
Postelection surveys regularly overestimate voter turnout by 10 points or more. This article provides the first comprehensive documentation of the turnout gap in three major ongoing surveys (the General Social Survey, Current Population Survey, and American National Election Studies), evaluates explanations for it, interprets its significance, and suggests means to continue evaluating and improving survey measurements of turnout. Accuracy was greater in face-to-face than telephone interviews, consistent with the notion that the former mode engages more respondent effort with less social desirability bias. Accuracy was greater when respondents were asked about the most recent election, consistent with the hypothesis that forgetting creates errors. Question wordings designed to minimize source confusion and social desirability bias improved accuracy. Rates of reported turnout were lower with proxy reports than with self-reports, which may suggest greater accuracy of proxy reports. People who do not vote are less likely to participate in surveys than voters are.
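The turnout gap discussed above is simply the survey's weighted turnout estimate minus the official benchmark (ballots counted divided by the voting-eligible population). A minimal illustration with made-up numbers:

```python
# Both figures below are hypothetical, chosen only to show the arithmetic.
survey_reported_turnout = 0.73   # weighted share of respondents reporting they voted
official_turnout        = 0.60   # ballots counted / voting-eligible population

gap = survey_reported_turnout - official_turnout
print(f"Survey overstates turnout by {gap * 100:.1f} percentage points")
```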
Archive | 2018
Matthew DeBell
The basic theoretical foundation for weighting survey data is not controversial. Weights account for each respondent's selection or inclusion probability, and as such, weights say how many people each person who responds to the survey represents in the population. However, in practice weighting is done ad hoc, with little transparency or consistency. This chapter discusses the limitations of ad hoc weighting, advocates greater transparency, accessibility, and replicability, and describes methodological developments to make the computation of survey weights easier for more people to do using transparent and replicable methods.
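The inverse-probability idea described above fits in a few lines. A minimal sketch, assuming a hypothetical stratified sample: the design weight is the reciprocal of the inclusion probability, so weights sum to the population size.

```python
import numpy as np

# Hypothetical two-stratum design: weight = 1 / inclusion probability,
# so each respondent stands in for "weight" people in the population.
population_sizes = np.array([6_000_000, 4_000_000])   # people per stratum (hypothetical)
sample_sizes     = np.array([600, 800])               # respondents per stratum (hypothetical)

inclusion_prob = sample_sizes / population_sizes       # chance any one person was selected
base_weight    = 1.0 / inclusion_prob                  # people each respondent represents

print(base_weight)                                     # [10000.  5000.]
print((base_weight * sample_sizes).sum())              # 10000000.0 = total population size
```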
Archive | 2018
Matthew DeBell
Survey weights are critical for accurate generalizations from survey data to a population. Unfortunately, weights are commonly developed in ad hoc ways by researchers, leading to a serious risk of inappropriate or inconsistent methods being applied. Furthermore, the weight development methods used by researchers and organizations are rarely disclosed in ways that allow critical inspection by the end users. Using the rigorous and transparent survey weight development process undertaken by the investigators of the American National Election Studies (ANES) as a case study, this chapter highlights best practices for the development and publication of weights for survey data. Best practices emphasize transparent, replicable, and consistent methods.
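The chapter's case study is the ANES weighting procedure itself, which this listing does not reproduce. As a hedged illustration of one transparent, replicable adjustment step commonly used in such pipelines, here is a toy raking (iterative proportional fitting) pass that aligns weights with two known population margins; the variables, categories, and targets are hypothetical, not ANES values.

```python
import numpy as np

# Toy raking (iterative proportional fitting): rescale weights so weighted
# sample margins match known population margins for two variables.
rng = np.random.default_rng(0)
n = 1_000
sex  = rng.integers(0, 2, n)                  # 2 categories (hypothetical)
educ = rng.integers(0, 3, n)                  # 3 categories (hypothetical)
w = np.ones(n)                                # start from equal base weights

sex_targets  = np.array([0.52, 0.48]) * n     # population shares scaled to sample size
educ_targets = np.array([0.40, 0.35, 0.25]) * n

for _ in range(50):                           # iterate until margins stabilize
    for values, targets in ((sex, sex_targets), (educ, educ_targets)):
        current = np.bincount(values, weights=w, minlength=len(targets))
        w *= (targets / current)[values]      # scale each category toward its target

print(np.bincount(sex,  weights=w) / n)       # ~ [0.52, 0.48]
print(np.bincount(educ, weights=w) / n)       # ~ [0.40, 0.35, 0.25]
```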
Social Indicators Research | 2008
Matthew DeBell
Political Analysis | 2013
Matthew DeBell
International Journal of Behavioral Development | 2008
David S. Crystal; Miki Kakinuma; Matthew DeBell; Hiroshi Azuma; Takahiro Miyashita
Political Psychology | 2017
Matthew DeBell
Archive | 2011
Daniel Schneider; Matthew DeBell; Jon A. Krosnick
Natural Field Experiments | 2017
Richard C. Bishop; Kevin J. Boyle; Richard T. Carson; David S. Chapman; Matthew DeBell; Colleen Donovan; W. Michael Hanemann; Barbara Kanninen; Matthew Konopka; Raymond J. Kopp; Jon A. Krosnick; John A. List; Norman Meade; Robert Paterson; Stanley Presser; Nora Scherer; V. Kerry Smith; Roger Tourangeau; Michael J. Welsh; Jeffrey M. Wooldridge