Public Opinion Quarterly | 2019

(Mis)Measuring Sensitive Attitudes with the List Experiment: Solutions to List Experiment Breakdown in Kenya

 
 

Abstract


List experiments (LEs) are an increasingly popular survey research tool for measuring sensitive attitudes and behaviors. However, there is evidence that list experiments sometimes produce unreasonable estimates. Why do list experiments "fail," and how can the performance of the list experiment be improved? Using evidence from Kenya, we hypothesize that the length and complexity of the LE format make it costlier for respondents to complete and thus prone to comprehension and reporting errors. First, we show that list experiments encounter difficulties with simple, nonsensitive lists about food consumption and daily activities: over 40 percent of respondents provide inconsistent responses between list experiment and direct question formats. These errors are concentrated among less numerate and less educated respondents, offering evidence that the errors are driven by the complexity and difficulty of list experiments. Second, we examine list experiments measuring attitudes about political violence. The standard list experiment reveals lower rates of support for political violence compared to simply asking directly about this sensitive attitude, which we interpret as list experiment breakdown. We evaluate two modifications to the list experiment designed to reduce its complexity: private tabulation and cartoon visual aids. Both modifications greatly enhance list experiment performance, especially among respondent subgroups where the standard procedure is most problematic. The paper makes two key contributions: (1) showing that techniques such as the list experiment, which have promise for reducing response bias, can introduce different forms of error associated with question complexity and difficulty; and (2) demonstrating the effectiveness of easy-to-implement solutions to the problem.

Survey researchers are often concerned with measuring sensitive attitudes and behaviors, including support for political violence, experience with corruption, and racial attitudes. A major challenge for studying such topics with surveys is social desirability bias: many individuals do not want to reveal socially unacceptable or potentially illegal attitudes and behaviors. Scholars have developed a number of strategies for reducing sensitivity-driven measurement error. The list experiment—or "item count technique"—is one approach that is increasingly popular in political science and related disciplines.

In this paper, we evaluate two modifications to standard list experiment procedures. The first allows respondents to privately tabulate the number of items in the list that apply to them, thereby aiding accurate response while creating additional assurance of privacy. The second adds visual aids intended to reduce respondent error, particularly among respondents who find the instructions and demands of a list experiment challenging.

List experiments (LEs) reduce survey error by asking respondents about sensitive issues indirectly: sensitive items are embedded in a list with several nonsensitive items, and participants are asked how many items they agree with or apply to them, but not which ones (see the examples in tables 3 and 4 later in this paper). This approach reduces the perceived costs/risks of answering honestly.
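To make the mechanics concrete, here is a minimal Python sketch that simulates reported counts for a single-list LE under the idealized assumptions of honest reporting and no design effects. The function name `simulate_le` and its parameters are our own illustrative choices, not part of the study's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_le(n, p_sensitive, p_controls):
    """Simulate reported item counts for a single list experiment.

    Hypothetical inputs: p_sensitive is the (assumed) true prevalence
    of the sensitive attitude; p_controls gives the prevalence of each
    nonsensitive control item.
    """
    p_controls = np.asarray(p_controls)
    # Latent yes/no answers to each control item
    controls = rng.random((n, p_controls.size)) < p_controls
    # Latent answer to the sensitive item
    sensitive = rng.random(n) < p_sensitive
    # Random assignment: only the treatment list contains the sensitive item
    treated = rng.random(n) < 0.5
    # Respondents report only the total count, never which items apply
    counts = controls.sum(axis=1) + (treated & sensitive)
    return counts, treated
```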
However, enthusiasm surrounding the list experiment has drawn attention away from its potential limitations. The length and complexity of the question format make it prone to comprehension and reporting errors. Importantly, such errors may be concentrated among certain population subgroups: those without experience answering complex survey questions, or those who most prevalently hold the sensitive attitude of interest. Unfortunately, identifying the extent to which these issues bias list experimental data is challenging because survey respondents' "true" answers to sensitive questions are usually unknown (Simpser 2017). Nonetheless, LEs often break down in obvious ways, producing estimates that are lower than the direct question or even nonsensical ones, such as negative estimates (Holbrook and Krosnick 2010).

In that light, we are motivated by two questions: Why do list experiments sometimes "fail" or break down? How can the performance of the list experiment be improved?

In this paper, we examine the LE and its ability to reduce survey error in Kenya, where we sought to measure public support for political violence. First, we investigate the performance of the LE using lists of simple, nonsensitive items about food consumption and daily activities. We show that the LE encounters difficulties with these simple and nonsensitive lists: over 40 percent of respondents provide inconsistent responses in LE versus direct question formats. These "failures" are concentrated among less numerate and less educated respondents, evidence that errors are driven by LE question complexity and difficulty. Second, we turn to list experiments designed to measure attitudes about political violence. We find that the standard LE estimates lower rates of support for political violence than those obtained by asking directly. These underestimates are most pronounced among less educated participants and those who provided inconsistent responses in the nonsensitive LEs described above, evidence that technique difficulty is driving list experiment breakdown. Finally, we evaluate two low-cost, context-appropriate modifications to the list experiment designed to reduce the complexity of the technique. The first allows for private tabulation, and the second combines private tabulation with cartoon visual aids. We find that both modifications improve list experiment performance, including among the subgroups that had difficulty with the nonsensitive LE.

This paper contributes to the literature on survey response bias in two ways. First, we show that indirect techniques such as the list experiment, which have promise for reducing response bias, can introduce different forms of error that are associated with question complexity and difficulty. This is important because the survey literature is populated with list experiments that perform well; we highlight limitations that might not be obvious from reading this published literature because of publication bias and the "file drawer problem." Our aim is not to suggest that all LEs are problematic, but rather to draw attention to these limitations. Our second contribution is demonstrating that relatively easy-to-implement and low-cost modifications can greatly enhance the performance of the technique, especially among populations where the standard procedure is most problematic.
Modifications designed to reduce item complexity and difficulty can be adapted by applied survey researchers working in a range of contexts.

Measuring Sensitive Attitudes with the List Experiment

Attitudes toward violence are emblematic of the challenges of studying sensitive topics. Support for political violence is subject to under-reporting biases because such violence is illegal and its approval is generally socially undesirable. Past research on violence has addressed sensitivity-driven measurement error by alleviating the perceived costs/risks of answering truthfully. Strategies include asking about violent behavior indirectly (Humphreys and Weinstein 2006), administering sensitive survey modules separately from a larger survey (Scacco 2016), anticipating or controlling for enumerator ethnicity effects (Kasara 2013; Carlson 2014; Adida et al. 2016), or using one of several experimental approaches: endorsement experiments (Blair et al. 2013; Lyall, Blair, and Imai 2013), the randomized response technique (Blair, Imai, and Zhou 2015), or the list experiment.

The list experiment is a promising alternative to direct questions, offering respondents greater secrecy for sensitive responses (e.g., Kuklinski, Cobb, and Gilens 1997; Corstange 2018; Gonzalez-Ocantos et al. 2011; Blair and Imai 2012; Glynn 2013). The LE presents a sensitive statement as one of several items in a list and asks respondents to identify how many total list items apply to them. Participants are randomly assigned to either a treatment list including the sensitive item or a control list that does not. Because the lists are otherwise identical, and assignment is randomized, the difference in means between treatment and control lists can be attributed to the sensitive item. If successfully implemented, the technique yields an estimate of the prevalence of the sensitive attitude.

Two assumptions must be satisfied for LE estimates to be valid: "no liars" and "no design effects" (Blair and Imai 2012). The first states that respondents "do not lie about the sensitive item" (Rosenfeld, Imai, and Shapiro 2016, 795). The second requires that adding the sensitive item to a list does not change the way respondents engage with the control items. Lists are generally designed to avoid "floor" and "ceiling" effects, which would make the sensitive response detectable and thus undermine the privacy the technique provides (Glynn 2013).

For single LEs, the estimated prevalence of the sensitive item is the difference in means between the treatment and control groups (e.g., Blair and Imai 2012; Streb et al. 2008). For example, if the control group mean is 2 and the treatment group mean is 2.2, the estimated prevalence in the sample would be 20 percent. In the double list experiment design (DLE), which uses two sets of lists such that all respondents receive one control list and one treatment list, the estimate is the average of the two single-list difference-in-means estimates.
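As a sketch of this estimator, the code below computes the difference in means and its standard error from reported counts, continuing the hypothetical `simulate_le` example above; the commented DLE lines simply average two single-list estimates, under our assumption that the two lists are analyzed independently.

```python
def le_estimate(counts, treated):
    """Difference-in-means estimate of sensitive-item prevalence."""
    diff = counts[treated].mean() - counts[~treated].mean()
    # Standard error for a difference of two independent sample means
    se = np.sqrt(counts[treated].var(ddof=1) / treated.sum()
                 + counts[~treated].var(ddof=1) / (~treated).sum())
    return diff, se

# Usage with the simulated data from above: a true prevalence of 0.2
# should yield an estimate near 0.2 (cf. the 2 vs. 2.2 example).
counts, treated = simulate_le(n=2000, p_sensitive=0.2,
                              p_controls=[0.5, 0.3, 0.6])
est, se = le_estimate(counts, treated)

# Double list experiment (DLE): with lists A and B, each respondent
# answers one treatment and one control list; the DLE estimate is the
# average of the two single-list estimates, e.g.:
# dle_est = (le_estimate(counts_a, treated_a)[0]
#            + le_estimate(counts_b, ~treated_a)[0]) / 2
```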

Public Opinion Quarterly, Volume 83, Pages 236-263
DOI: 10.1093/poq/nfz009
