Public Opinion Quarterly | 2021

SENSITIVE QUESTIONS IN SURVEYS


Abstract


In research on sensitive questions in surveys, the item count technique (ICT) has gained increased attention in recent years as a means of counteracting the problem of misreporting, that is, the under- and over-reporting of socially undesirable and socially desirable behaviors or attitudes. The performance of ICT compared with conventional direct questioning (DQ) has been investigated in numerous experimental studies, yielding mixed evidence that calls for a systematic review.

For this purpose, the present article reports results from a comprehensive meta-analysis of experimental studies comparing ICT estimates of sensitive items to those obtained via DQ. In total, 89 research articles with 124 distinct samples and 303 effect estimates are analyzed. All studies rely on the “more (less) is better” assumption, meaning that higher (lower) estimates of negatively (positively) connoted traits or behaviors are considered more valid.

The results show (1) a significantly positive pooled effect of ICT on the validity of survey responses compared with DQ; (2) pronounced heterogeneity in study results, indicating uncertainty about whether ICT would work as intended in future studies; and (3) as meta-regression models indicate, that the design and characteristics of studies, items, and ICT procedures affect the success of ICT. There is no evidence of an overestimation of the effect due to publication bias.

We conclude that ICT is generally a viable method for measuring sensitive topics in survey studies, but its reliability has to be improved to ensure more stable performance.
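The ICT design the abstract refers to can be summarized briefly: a control group reports how many of k non-sensitive items apply to them, a treatment group receives the same list plus the sensitive item, and the difference in mean counts estimates the prevalence of the sensitive trait without any respondent revealing it directly. Below is a minimal sketch of this difference-in-means estimator in Python; the function name and the simulated data are illustrative assumptions, not taken from the article.

    # Minimal sketch of the ICT (list experiment) difference-in-means
    # estimator, assuming a standard single-list design. All names and
    # data here are hypothetical illustrations.
    import numpy as np

    def ict_prevalence(control_counts, treatment_counts):
        """Estimate sensitive-item prevalence from list-experiment counts.

        control_counts:   counts from respondents shown only the k
                          non-sensitive baseline items
        treatment_counts: counts from respondents shown the baseline
                          items plus the sensitive item
        """
        control = np.asarray(control_counts, dtype=float)
        treatment = np.asarray(treatment_counts, dtype=float)
        estimate = treatment.mean() - control.mean()
        # Standard error of a difference in independent sample means
        se = np.sqrt(control.var(ddof=1) / len(control)
                     + treatment.var(ddof=1) / len(treatment))
        return estimate, se

    # Hypothetical example: 4 baseline items, true prevalence 0.3
    rng = np.random.default_rng(0)
    control = rng.binomial(4, 0.5, size=500)
    treatment = rng.binomial(4, 0.5, size=500) + rng.binomial(1, 0.3, size=500)
    est, se = ict_prevalence(control, treatment)
    print(f"ICT prevalence estimate: {est:.3f} (SE {se:.3f})")

Because the sensitive answer is never observed individually, ICT trades statistical efficiency for anonymity; this privacy-for-variance trade-off is what the meta-analysis evaluates against direct questioning.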

DOI 10.1093/poq/nfab002
Language English
Journal Public Opinion Quarterly
