Joe Kerkvliet
Oregon State University
Publication
Featured research published by Joe Kerkvliet.
The American Economic Review | 2004
John A. List; Robert P. Berrens; Alok K. Bohara; Joe Kerkvliet
Benefit-cost analysis remains the central paradigm used throughout the public sector. A necessary condition underlying efficient benefit-cost analysis is an accurate estimate of the total value of the nonmarketed good or service in question. While economists have long measured the benefits of private goods routinely bought and sold in the marketplace, a much more difficult task faces the practitioner interested in estimating the total benefits of increased air and water quality, for example. In such cases, policy makers rely on stated preference methods (contingent markets) to provide signals of value. Recently there has been a lively debate about whether, and to what extent, "hypothetical bias" permeates benefit estimation in contingent markets. This debate has proliferated among academics and practitioners over the past several decades, and continues to find its way into public disputes of damage assessment, development decisions, and discussions of optimal regulatory standards. This study extends the debate in a new direction by taking advantage of a unique opportunity we were provided at the University of Central Florida (UCF), where we were approached to spearhead a capital campaign to fund a new Center for Environmental Policy Analysis (CEPA). The experimental design, which includes valuation decisions from nearly 300 subjects randomly placed into one of six treatment cells, permits an examination of the comparative static effects of varying social isolation while holding the other important facets of the valuation instrument constant. Our baseline treatments ask two different groups of respondents to vote Yes or No on contributing $20 to provide start-up capital for CEPA (one treatment hypothetical and one treatment actual). In these two baseline treatments, similar to many contingent valuation (CV) exercises carried out in practice (e.g., in-person, mail, or telephone), only the experimenter can observe each individual's response. In the third and fourth treatments, denoted Randomized Response, we again ask a hypothetical or actual question concerning a $20 contribution, but we relax the degree of social pressure by using a randomized response format, which, by delinking the observed answer from the voting decision, assures the subject that her stated preferences are unidentifiable. These treatments resemble the use of an anonymous ballot-box approach to obtaining individual values. Our final two treatments, labeled Peer Group, considerably decrease subject anonymity by randomly choosing 10 people to stand up and inform the group of their voting decision. These treatments bear resemblance to contingent surveys performed with small groups (or poorly controlled Web-based surveys). The experimental results are interesting. Consonant with some previous studies, we observe signs of hypothetical bias. More importantly, we find that the difference between hypothetical and actual voting decisions is of roughly the same magnitude as the difference between actual voting decisions across treatments that vary […]

Journal of Economic Education | 1994
Joe Kerkvliet
Logit model estimates suggest that between 25 and 42 percent of economics undergraduates have cheated on exams, with a heavy drinker who is a resident member of a fraternity or sorority being the most likely offender.

Journal of Environmental Economics and Management | 2003
Christian A. Vossler; Joe Kerkvliet
This study pursues external validation of contingent valuation by comparing survey results with the voting outcome of a Corvallis, Oregon, referendum to fund a riverfront improvement project through increased property taxes. Survey respondents hypothetically make a voting decision—with no financial consequences—on the upcoming referendum. The survey sample consists of respondents verified to have voted in the election. We use available precinct-level election data to compare the proportion of "yes" survey and referendum votes, as well as to estimate voting models and mean willingness to pay (WTP) based on the two sets of data. We find that survey responses match the actual voting outcome, and WTP estimates based on the two are not statistically different. Contrary to similar studies, our statistical results do not depend on re-coding the majority of "undecided" survey responses to "no." Furthermore, such a re-coding of responses may be inappropriate for our data set.

Journal of Economic Behavior and Organization | 2003
Christian A. Vossler; Joe Kerkvliet; Stephen Polasky; Olesya Gainutdinova
We ask whether respondents report the same decisions in non-binding surveys as they do in real elections. We study a Corvallis, OR, referendum to raise open space funds and find that survey and election compatibility pivots on the characterization of "undecided" responses. The survey and referendum percentages of "yes" votes match closely only if "undecided" responses are treated as "no." We compare survey-based mean willingness to pay (WTP) estimates with election-based estimates. Household WTP averages US$48.89 using election results, while survey-based WTP averages US$75.43 excluding "undecided" responses, US$49.67 treating "undecided" responses as "no," and between US$49.96 and US$80.05.

Journal of Financial and Quantitative Analysis | 1991
Joe Kerkvliet; Michael H. Moffett

Journal of Industrial Economics | 1991
Joe Kerkvliet

Ecological Economics | 2000
Joe Kerkvliet; Clifford Nowell

The RAND Journal of Economics | 1986
Scott E. Atkinson; Joe Kerkvliet

The Review of Economics and Statistics | 1989
Scott E. Atkinson; Joe Kerkvliet

Review of Development Economics | 2000
Chi-Chur Chao; Joe Kerkvliet; Eden S. H. Yu
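The randomized response format described in the List, Berrens, Bohara, and Kerkvliet abstract above can be illustrated with a short simulation. This is a hypothetical sketch of Warner's (1965) classic estimator, not code from any of the papers listed; the function names and parameter values are invented for illustration.

```python
import random

def warner_estimate(answers, p):
    """Recover the estimated prevalence of a sensitive trait from
    randomized-response answers (Warner, 1965).

    Each respondent privately draws the sensitive statement with probability
    p and its complement with probability 1 - p, then answers truthfully.
    Only the Yes/No answer is observed, so no single answer reveals the
    respondent's true status, yet the aggregate prevalence is recoverable:
    P(yes) = p*pi + (1 - p)*(1 - pi), so pi = (P(yes) - (1 - p)) / (2p - 1).
    """
    lam = sum(answers) / len(answers)     # observed share of "yes" answers
    return (lam - (1 - p)) / (2 * p - 1)  # implied prevalence of the trait

def simulate(true_prevalence, p, n, rng):
    """Simulate n randomized responses for a trait held by true_prevalence."""
    answers = []
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        drew_sensitive = rng.random() < p   # which statement was drawn
        answers.append(has_trait if drew_sensitive else not has_trait)
    return answers

rng = random.Random(0)
answers = simulate(true_prevalence=0.30, p=0.75, n=100_000, rng=rng)
print(round(warner_estimate(answers, p=0.75), 2))  # close to 0.30
```

Note that the estimator requires p ≠ 0.5: at p = 0.5 the observed "yes" rate is uninformative, which is exactly the point at which respondent privacy is greatest, so the survey designer trades privacy against statistical precision.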