Publication


Featured research published by Frauke Kreuter.


Sociological Methods & Research | 2011

The Effects of Asking Filter Questions in Interleafed Versus Grouped Format

Frauke Kreuter; Susan McCulloch; Stanley Presser; Roger Tourangeau

When filter questions are asked to determine respondent eligibility for follow-up items, they are administered either interleafed (follow-up items immediately after the relevant filter) or grouped (follow-up items after multiple filters). Experiments with mental health items have found the interleafed form produces fewer yeses to later filters than the grouped form. Given the sensitivity of mental health, it is unclear whether this is due to respondent desire to avoid sensitive issues or simply the desire to shorten the interview. The absence of validation data in these studies also means the nature of the measurement error associated with the filter types is unknown. We conducted an experiment using mainly nonsensitive topics of varying cognitive burden with a sample that allowed validation of some items. Filter format generally had an effect, which grew as the number of filters increased and was larger when the follow-up questions were more difficult. Surprisingly, there was no evidence that measurement error for filters was reduced in the grouped version; moreover, missing data for follow-up items was increased in that version.
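
To make the two formats concrete, here is a minimal sketch of how a script could route filter and follow-up items in the interleafed versus grouped order. The question texts and the ask() helper are hypothetical illustrations, not items from the study.

# Interleafed vs. grouped administration of filter questions (illustrative only).
FILTERS = {
    "car": "Do you own a car?",
    "home": "Do you own your home?",
}
FOLLOW_UPS = {
    "car": ["What year was it made?", "Is it fully paid off?"],
    "home": ["When did you buy it?", "Do you have a mortgage?"],
}

def ask(text):
    # Stand-in for a real interviewing system; returns True for a "yes".
    return input(text + " (y/n) ").strip().lower() == "y"

def interleafed():
    # Each follow-up comes immediately after the filter that triggers it.
    for key, filter_text in FILTERS.items():
        if ask(filter_text):
            for follow_up in FOLLOW_UPS[key]:
                ask(follow_up)

def grouped():
    # All filters are asked first; follow-ups for the "yes" filters come later.
    answers = {key: ask(text) for key, text in FILTERS.items()}
    for key, said_yes in answers.items():
        if said_yes:
            for follow_up in FOLLOW_UPS[key]:
                ask(follow_up)

In the interleafed order a "yes" visibly lengthens the interview right away, which is one of the hypothesized reasons respondents may learn to answer "no" to later filters.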


Journal of Official Statistics | 2013

“Interviewer” Effects in Face-to-Face Surveys: A Function of Sampling, Measurement Error, or Nonresponse?

Brady T. West; Frauke Kreuter; Ursula Jaenichen

Recent research has attempted to examine the proportion of interviewer variance that is due to interviewers systematically varying in their success in obtaining cooperation from respondents with varying characteristics (i.e., nonresponse error variance), rather than variance among interviewers in systematic measurement difficulties (i.e., measurement error variance): that is, whether correlated responses within interviewers arise from variance among interviewers in the pools of respondents recruited, or from variance in interviewer-specific mean response biases. Unfortunately, work to date has only considered data from a CATI survey and thus suffers from two limitations: interviewer effects are commonly much smaller in CATI surveys, and, more importantly, sample units are often contacted by several CATI interviewers before a final outcome (response or final refusal) is achieved. The latter introduces difficulties in assigning nonrespondents to interviewers, so interviewer variance components are only estimable under strong assumptions. This study aims to replicate this initial work, analyzing data from a national CAPI survey in Germany in which CAPI interviewers were responsible for working a fixed subset of cases.
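
As a rough illustration of the variance decomposition at stake here, the interviewer component of response variance is typically estimated with a random-intercept (multilevel) model in which interviewers form the grouping factor. The sketch below is not the authors' analysis; the data file and column names are hypothetical, and it only shows how an intra-interviewer correlation could be computed.

# Minimal sketch: estimating an interviewer variance component with a
# random-intercept model. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("capi_survey.csv")   # assumed columns: y, age, interviewer

# Random intercept per interviewer; a fixed effect adjusts for a respondent
# characteristic on which interviewer workloads may differ.
model = smf.mixedlm("y ~ age", data=df, groups=df["interviewer"])
result = model.fit()

var_between = result.cov_re.iloc[0, 0]   # between-interviewer variance
var_within = result.scale                # residual (within-interviewer) variance
icc = var_between / (var_between + var_within)
print(f"Intra-interviewer correlation: {icc:.3f}")

Such a model shows how large the interviewer component is, but not whether it stems from nonresponse error variance or measurement error variance; separating the two is precisely what requires information on nonrespondents and a clear assignment of cases to interviewers, as discussed in the article.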


Sociological Methods & Research | 2011

Multiple Auxiliary Variables in Nonresponse Adjustment

Frauke Kreuter; Kristen Olson

Prior work has shown that effective survey nonresponse adjustment variables should be highly correlated with both the propensity to respond to a survey and the survey variables of interest. In practice, propensity models are often used for nonresponse adjustment with multiple auxiliary variables as predictors. These auxiliary variables may be positively or negatively associated with survey participation, may be correlated with each other, and may have positive or negative relationships with the survey variables. Yet the consequences of these conditions for nonresponse adjustment are not known to survey practitioners. Simulations are used here to examine the effects of multiple auxiliary variables with opposite relationships with survey participation and the survey variables. The results show that the bias and mean square error of adjusted respondent means differ substantially depending on whether the predictors' relationships with the response propensity or with the survey variables run in the same or in opposite directions. Implications for nonresponse adjustment and responsive designs are discussed.
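
The setup can be made concrete with a small simulation. The sketch below is illustrative only and is not the simulation design used in the article: two auxiliary variables drive both the response propensity and the survey variable, the sign of the second variable in the propensity is flipped between runs, and respondent means are adjusted with inverse estimated-propensity weights.

# Illustrative simulation (not the article's design): propensity-model
# nonresponse adjustment with two auxiliary variables whose relationships
# with response propensity have the same or opposite signs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000

def simulate(sign_x2):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)              # survey variable
    logit = 0.5 * x1 + sign_x2 * 0.5 * x2         # response propensity
    respond = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Nonresponse adjustment: weight respondents by 1 / estimated propensity.
    X = np.column_stack([x1, x2])
    phat = LogisticRegression().fit(X, respond).predict_proba(X)[:, 1]
    weights = 1 / phat[respond]

    full_mean = y.mean()
    unadjusted = y[respond].mean()
    adjusted = np.average(y[respond], weights=weights)
    return unadjusted - full_mean, adjusted - full_mean

for sign in (+1, -1):
    bias_unadj, bias_adj = simulate(sign)
    print(f"x2 sign in propensity {sign:+d}: "
          f"unadjusted bias {bias_unadj:+.3f}, adjusted bias {bias_adj:+.3f}")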


The Annals of the American Academy of Political and Social Science | 2013

Facing the Nonresponse Challenge

Frauke Kreuter

This article provides a brief overview of key trends in survey research to address the nonresponse challenge. Noteworthy are efforts to develop new quality measures and to combine several data sources to enhance either the data collection process or the quality of the resulting survey estimates. Mixtures of survey data collection modes and less burdensome survey designs are additional steps taken by survey researchers to address nonresponse.


Sociological Methods & Research | 2013

Undercoverage Rates and Undercoverage Bias in Traditional Housing Unit Listing

Stephanie Eckman; Frauke Kreuter

Many face-to-face surveys use field staff to create lists of housing units from which samples are selected. However, housing unit listing is vulnerable to errors of undercoverage: Some housing units are missed and have no chance to be selected. Such errors are not routinely measured and documented in survey reports. This study jointly investigates the rate of undercoverage, the correlates of undercoverage, and the bias in survey data due to undercoverage in listed housing unit frames. Working with the National Survey of Family Growth, we estimate an undercoverage rate for traditional listing efforts of 13.6 percent. We find that multiunit status, rural areas, and map difficulties strongly correlate with undercoverage. We find significant bias in estimates of variables such as birth control use, pregnancies, and income. The results have important implications for users of data from surveys based on traditionally listed housing unit frames.
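
The mechanism behind this bias can be stated compactly: the bias of a mean computed only from covered (listed) units equals the undercoverage rate times the difference in means between covered and missed units. The short numeric sketch below uses the 13.6 percent rate reported above; the two group means are purely hypothetical.

# Coverage-bias identity: bias = undercoverage rate * (mean covered - mean missed).
# Only the 13.6% rate comes from the article; the group means are hypothetical.
undercoverage_rate = 0.136
mean_covered = 0.40   # hypothetical proportion among listed housing units
mean_missed = 0.55    # hypothetical proportion among missed housing units

bias = undercoverage_rate * (mean_covered - mean_missed)
print(f"Bias of the covered-units mean: {bias:+.4f}")   # -0.0204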


Archive | 2014

Extracting information from big data: issues of measurement, inference and linkage

Frauke Kreuter; Roger D. Peng

Big data pose several interesting and new challenges to statisticians and others who want to extract information from data. As Groves pointedly commented, the era is “appropriately called Big Data as opposed to Big Information,” because there is a lot of work for analysts before information can be gained from “auxiliary traces of some process that is going on in the society.” The analytic challenges most often discussed are those related to three of the Vs used to characterize big data. The volume of truly massive data requires an expansion of processing techniques that match modern hardware infrastructure, cloud computing with appropriate optimization mechanisms, and a re-engineering of storage systems. The velocity of the data calls for algorithms that allow learning and updating on a continuous basis, and of course for the computing infrastructure to do so. Finally, the variety of the data structures requires statistical methods that more easily allow different data types, collected at different levels and sometimes with a temporal and geographic structure, to be combined. However, when it comes to privacy and confidentiality, the challenges of extracting (meaningful) information from big data are in our view similar to those associated with data of much smaller size, surveys being one example. For any statistician or quantitatively working (social) scientist there are two main concerns when extracting information from data, which we summarize here as concerns about measurement and concerns about inference. Both of these aspects can be affected by privacy and confidentiality concerns.


Archive | 2006

Schüler als Informanten? Die Qualität von Schülerangaben zum sozialen Hintergrund (Students as Informants? The Quality of Student Reports on Social Background)

Kai Maaz; Frauke Kreuter; Rainer Watermann

Analyzing social disparities in educational participation and competence acquisition requires differentiated and valid measurement of social background characteristics. With the theoretical concepts of cultural and social capital and of socioeconomic status, PISA offers a theoretical framework for operationalizing social background that has been applied in many research contexts and has since become an international standard for collecting social background characteristics and analyzing social disparities. In empirical analyses of social disparities, however, it must not be overlooked that the measurement of these characteristics can be subject to measurement problems.


Archive | 2017

Total Survey Error in Practice

Paul P. Biemer; Edith D. de Leeuw; Stephanie Eckman; Brad Edwards; Frauke Kreuter; Lars E. Lyberg; N. Clyde Tucker; Brady T. West

This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affects many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied error sources, such as coverage error, processing error, and specification error. The book also notes the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total error.
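
As a point of reference for the error sources listed above, the TSE framework is commonly summarized by decomposing the mean squared error of a survey estimate into squared total bias plus variance, with each term fed by several error sources. A generic textbook-style statement in LaTeX notation, under the simplifying assumption that the components add, is (this is a paraphrase, not a formula quoted from the book):

\mathrm{MSE}(\hat{\theta}) =
  \bigl(B_{\mathrm{spec}} + B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{meas}} + B_{\mathrm{proc}}\bigr)^{2}
  + \mathrm{Var}_{\mathrm{samp}} + \mathrm{Var}_{\mathrm{meas}} + \mathrm{Var}_{\mathrm{nr}}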


Sociological Methods & Research | 2014

A Note on Mechanisms Leading to Lower Data Quality of Late or Reluctant Respondents

Frauke Kreuter; Gerrit Müller; Mark Trappmann

Survey methodologists worry about trade-offs between nonresponse and measurement error. Past findings indicate that respondents brought into the survey late provide low-quality data. The diminished data quality is often attributed to lack of motivation. Quality is often measured through internal indicators and rarely through true scores. Using administrative data for validation purposes, this article documents increased measurement error as a function of recruitment effort for a large-scale employment survey in Germany. In this case study, the reduction in measurement quality of an important target variable is largely caused by differential measurement error in subpopulations and respective shifts in sample composition, as well as increased cognitive burden through the increased length of recall periods among later respondents. Only small portions of the relationship could be attributed to a lack of motivation among late or reluctant respondents.
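
The record-check logic behind such a study can be sketched in a few lines. The sketch below is not the authors' analysis; the file names, the linked administrative variable, and the use of contact attempts as the effort measure are assumptions for illustration.

# Illustrative record-check analysis: compare survey reports with linked
# administrative values and summarize the error by recruitment effort.
# File and column names are hypothetical.
import pandas as pd

survey = pd.read_csv("survey.csv")   # assumed columns: id, reported_months, contact_attempts
admin = pd.read_csv("admin.csv")     # assumed columns: id, true_months

df = survey.merge(admin, on="id")
df["error"] = df["reported_months"] - df["true_months"]

# Mean signed and absolute error by level of recruitment effort.
summary = df.groupby("contact_attempts")["error"].agg(
    mean_error="mean",
    mean_abs_error=lambda e: e.abs().mean(),
    n="size",
)
print(summary)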


Archive | 2016

Big Data and Social Science: A Practical Guide to Methods and Tools

Ian Foster; Rayid Ghani; Ron S. Jarmin; Frauke Kreuter; Julia Lane

Big Data and Social Science: A Practical Guide to Methods and Tools shows both traditional students and working professionals how to apply data science to real-world problems in research and practice. The book provides practical guidance on combining methods and tools from computer science, statistics, and social science. This concrete approach is illustrated throughout using an important national problem, the quantitative study of innovation. The text draws on the expertise of prominent leaders in statistics, the social sciences, data science, and computer science to teach students how to use modern social science research principles as well as the best analytical and computational tools. It uses a real-world challenge to introduce how these tools are used to identify and capture appropriate data, apply data science models and tools to that data, and recognize and respond to data errors and limitations. For more information, including sample chapters and news, please visit the authors' website.

Collaboration


Dive into Frauke Kreuter's collaborations. Top co-authors include:

Rainer Schnell (University of Duisburg-Essen)

Mark Trappmann (Institut für Arbeitsmarkt- und Berufsforschung)
