Roger Tourangeau
Westat
Publication
Featured research published by Roger Tourangeau.
Psychological Bulletin | 2007
Roger Tourangeau; Ting Yan
Psychologists have worried about the distortions introduced into standardized personality measures by social desirability bias. Survey researchers have had similar concerns about the accuracy of survey reports about such topics as illicit drug use, abortion, and sexual behavior. The article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels and differences from the psychological literature on social desirability. The findings from the survey studies suggest that misreporting about sensitive topics is quite common and that it is largely situational. The extent of misreporting depends on whether the respondent has anything embarrassing to report and on design features of the survey. The survey evidence also indicates that misreporting on sensitive topics is a more or less motivated process in which respondents edit the information they report to avoid embarrassing themselves in the presence of an interviewer or to avoid repercussions from third parties.
Social Science Computer Review | 2004
Mick P. Couper; Roger Tourangeau; Frederick G. Conrad; Scott D. Crawford
Several alternative response formats are available to the web survey designer, but the choice of format is often made with little consideration of measurement error. The authors experimentally explore three common response formats used in web surveys: a series of radio buttons, a drop box with none of the options initially displayed until the respondent clicks on the box, and a scrollable drop box with some of the options initially visible, requiring the respondent to scroll to see the remainder of the options. The authors reversed the order of the response options for half the sample. The authors find evidence of response order effects but stronger evidence that visible response options are endorsed more frequently, suggesting that visibility may be a more powerful effect than primacy in web surveys. The results suggest that the response format used in web surveys does affect the choices made by respondents.
Social Science Computer Review | 2006
Mick P. Couper; Roger Tourangeau; Frederick G. Conrad; Eleanor Singer
The use of visual analog scales (VAS) in survey research has been relatively rare, in part because of operational difficulties. However, web surveys permit the use of continuous input devices such as slider bars, making VAS more feasible. The authors conducted an experiment to explore the utility of a VAS in a web survey, comparing it to radio button input and numeric entry in a text box on a series of bipolar questions eliciting views on genetic versus environmental causes of various behaviors. The experiment included a variety of additional comparisons, including the presence or absence of numeric feedback in the VAS, the use of a midpoint or no midpoint for the other two versions, and numbered versus unnumbered radio button scales. The response distributions for the VAS did not differ from those using the other scale types, and the VAS had higher rates of missing data and longer completion times.
Human Factors in Computing Systems | 2003
Roger Tourangeau; Mick P. Couper; Darby Miller Steiger
Social interface theory has had widespread influence within the field of human–computer interaction. The basic thesis is that humanizing cues in a computer interface can engender responses from users similar to those produced by interactions between humans. These humanizing cues often confer human characteristics on the interface (such as gender) or suggest that the interface is an agent actively interacting with the respondent. In contrast, the survey interviewing literature suggests that computer administration of surveys on highly sensitive topics reduces or eliminates social desirability effects, even when such humanizing features as recorded human voices are used. In attempting to reconcile these apparently contradictory findings, we varied features of the interface in two Web surveys and a telephone survey. In the first Web experiment, we presented an image of (1) a male researcher, (2) a female researcher, or (3) the study logo at several points throughout the questionnaire. This experiment also varied the extent of personal feedback provided to the respondent. The second Web study compared three versions of the survey: (1) one that included a photograph of a female researcher and text messages from her; (2) another version that included only the text messages; and (3) a final version that included neither the picture nor the personalizing messages. Finally, we carried out a telephone study using a method—interactive voice response (IVR)—in which the computer plays a recording of the questions over the telephone and respondents indicate their answers by pressing keys on the telephone handset. The IVR study varied the voice that administered the questions. All three surveys used questionnaires that included sensitive questions about sexual behavior and illicit drug use and questions on gender-related attitudes. We find limited support for the social interface hypothesis.
There are consistent, though small, effects of the “gender” of the interface on reported gender attitudes, but few effects on socially desirable responding. We propose several possible reasons for the contradictory evidence on social interfaces.
Journal of the Association of Environmental and Resource Economists, Vol. 4(2), pp. 319-405 | 2017
Robert J. Johnston; Kevin J. Boyle; Wiktor L. Adamowicz; Jeffrey Bennett; Roy Brouwer; Trudy Ann Cameron; W. Michael Hanemann; Nick Hanley; Mandy Ryan; Riccardo Scarpa; Roger Tourangeau; Christian A. Vossler
This article proposes contemporary best-practice recommendations for stated preference (SP) studies used to inform decision making, grounded in the accumulated body of peer-reviewed literature. These recommendations consider the use of SP methods to estimate both use and non-use (passive-use) values, and cover the broad SP domain, including contingent valuation and discrete choice experiments. We focus on applications to public goods in the context of the environment and human health but also consider ways in which the proposed recommendations might apply to other common areas of application. The recommendations recognize that SP results may be used and reused (benefit transfers) by governmental agencies and nongovernmental organizations, and that all such applications must be considered. The intended result is a set of guidelines for SP studies that is more comprehensive than that of the original National Oceanic and Atmospheric Administration (NOAA) Blue Ribbon Panel on contingent valuation, is more germane to contemporary applications, and reflects the two decades of research since that time. We also distinguish between practices for which accumulated research is sufficient to support recommendations and those for which greater uncertainty remains. The goal of this article is to raise the quality of SP studies used to support decision making and promote research that will further enhance the practice of these studies worldwide.
Marketing Letters | 2002
Joffre Swait; Wiktor L. Adamowicz; Michael Hanemann; Adele Diederich; Jon A. Krosnick; David F. Layton; William Provencher; David A. Schkade; Roger Tourangeau
There is an emerging consensus among disciplines dealing with human decision making that the context in which a decision is made is an important determinant of outcomes. This consensus has been slow in the making because much of what is known about context effects has evolved from a desire to demonstrate the untenability of certain common assumptions upon which tractable models of behavior have generally been built. This paper seeks to bring disparate disciplinary perspectives to bear on the relation between context and choice, to formulate (1) recommendations for improvements to the state-of-the-practice of Random Utility Models (RUMs) of choice behavior, and (2) a future research agenda to guide the further incorporation of context into these models of choice behavior.
Public Opinion Quarterly | 2003
Mick P. Couper; Eleanor Singer; Roger Tourangeau
In surveys on sensitive topics (sexuality, drug use, etc.), self-administered questionnaires take the form of traditional paper questionnaires or of audio computer-assisted versions. This article examines the respective advantages of each of these two approaches, depending on the setting in which the questionnaire is administered.
Public Opinion Quarterly | 2001
Roger Tourangeau; Darby Miller Steiger; David C. Wilson
Over the past 25 years, computerization has swept over survey research, making computer-assisted data collection the de facto standard in the United States and Western Europe (Couper and Nicholls 1998). The move to computerization may now be ushering in a golden age for self-administered questions; the newest methods of survey data collection to emerge have reduced the role of the interviewer or eliminated it entirely, allowing the respondents to interact directly with the computer. The new modes of self-administered data collection include Web surveys and a technology variously referred to as interactive voice response (IVR), touchtone data entry (TDE), and telephone audio computer-assisted self-interviewing (T-ACASI). These different labels refer to the same data collection technology, in which the computer plays a recording of the questions over the telephone and respondents indicate their answers by pressing keys on their handsets (Appel, Tortora, and Sigman 1992; Blyth 1997; Frankovic 1994; Gribble et al. 2000; Harrell and Clayton 1991; Phipps and Tupek 1990; Turner et al. 1996b, 1998). We will refer to this method of data collection as IVR, the term used at Gallup and at most market research firms. Automated telephone systems for gathering nonsurvey information are now widespread (e.g., for catalog sales, airline reservations, banking, and so on).
Sociological Methods & Research | 2003
Roger Tourangeau; Eleanor Singer; Stanley Presser
The authors present the results from parallel experiments in two surveys about privacy attitudes. Concerned that the order of the questions might affect the answers, they systematically varied the order of some of the key questions in the questionnaire. In both surveys, four of five question order experiments produced significant effects on responses to the items involved, but the question order variables did not affect responses to attitude items that came later in the questionnaire or relations between the attitude items whose order the authors varied and background characteristics of the respondents. They were also able to determine for most respondents whether their household had mailed back its census questionnaire. Question order did not have a consistent impact on the correlations between the survey responses and actual census returns. These results suggest that question order effects may be common with conceptually related items but that their impact is generally local, affecting answers to the items themselves but not answers to later questions, correlations with respondent background characteristics, or relations to subsequent behaviors.
Human Factors in Computing Systems | 2001
Mick P. Couper; Roger Tourangeau; Darby Miller Steiger
Social interface theory has widespread influence in the field of human-computer interaction. The basic thesis is that humanizing cues in a computer interface can engender responses from users similar to those in human-human interaction. In contrast, the survey interviewing literature suggests that computer administration of surveys on highly sensitive topics reduces or eliminates social desirability effects, even when such humanizing features as voice are used. In attempting to reconcile these apparently contradictory findings, we varied features of the interface in a Web survey (n=3047). In one treatment, we presented an image of (1) a male researcher, (2) a female researcher, or (3) the study logo at several points. In another, we varied the extent of personal feedback provided. We find little support for the social interface hypothesis. We describe our study and discuss possible reasons for the contradictory evidence on social interfaces.