Koen Beullens
Katholieke Universiteit Leuven
Publications
Featured research published by Koen Beullens.
Social Science Research | 2013
Geert Loosveldt; Koen Beullens
In surveys carried out by interviewers trained according to the key principle of standardized interviewing, it is assumed that the interviewer has only a limited impact on the time a respondent needs to answer questions. In this paper, the effects of interviewer and respondent characteristics on interview speed are analyzed simultaneously by means of a three-level random coefficient model. Data from the fifth round of the European Social Survey (ESS) are used. In twelve participating countries, timers were implemented at several places in the computer-assisted personal interviewing (CAPI) questionnaire. Based on this timing information, interview speed (the number of questions asked per minute) was measured for each respondent during five modules of the questionnaire. The results support most of the expectations concerning the effects of the respondent characteristics. However, they also indicate that, in all countries, interviewers strongly determine interview speed and that interview length is not a simple linear function of the number of questions in a questionnaire.
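A minimal sketch of a three-level specification of this kind, with module-level speed measurements nested in respondents nested in interviewers. The model actually estimated in the paper also includes covariates with random coefficients, so this is only the general form, not the authors' exact equation.

```latex
% Illustrative three-level random intercept model for interview speed
% (module m, respondent r, interviewer i); covariates and random slopes omitted.
\begin{aligned}
\mathrm{speed}_{mri} &= \beta_0 + v_i + u_{ri} + e_{mri},\\
v_i &\sim N(0,\sigma^2_{\mathrm{interviewer}}), \quad
u_{ri} \sim N(0,\sigma^2_{\mathrm{respondent}}), \quad
e_{mri} \sim N(0,\sigma^2_{\mathrm{module}}).
\end{aligned}
```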
Journal of Official Statistics | 2017
Geert Loosveldt; Koen Beullens
In this article we examine interviewer effects on different aspects of response styles, namely non-differentiation and straightlining, which refer in general to the tendency to provide the same answers to the questions in a block of items. In research on response styles, attention to the impact of the interviewer on this kind of response behavior is rather rare. Five blocks of items in the questionnaire of the sixth round of the European Social Survey (2012) are used in the analysis. These data also allow for an evaluation of the differences between countries in terms of non-differentiation and straightlining. Five different measures of these aspects of response style are used in the analysis. To disentangle the impact of respondents and interviewers on these aspects of response style, a three-level random intercept model is specified. The results clearly show interviewer effects on the respondent's tendency to select a response category that is the same as the response category for the previous item. In some countries the proportion of explained variance due to differences between interviewers is larger than the proportion of variance explained by differences between respondents.
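As an illustration of how such indicators can be computed, here is a small Python sketch with simple, generic measures (the share of answers repeating the previous item, a straightlining flag, and the within-block standard deviation). These are illustrative only and not necessarily the five measures used in the article.

```python
import numpy as np

def block_response_style_indicators(answers):
    """Illustrative (non-)differentiation indicators for one respondent's
    answers to a block of items. `answers` is a 1-D array of response codes."""
    a = np.asarray(answers, dtype=float)
    same_as_previous = np.mean(a[1:] == a[:-1])   # share of items repeating the previous answer
    straightlining = float(np.all(a == a[0]))     # 1 if every item gets the same category
    differentiation = np.std(a)                   # higher = more differentiated answers
    return {"same_as_previous": same_as_previous,
            "straightlining": straightlining,
            "differentiation": differentiation}

# Example: a respondent who straightlines a 6-item block
print(block_response_style_indicators([3, 3, 3, 3, 3, 3]))
```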
SAGE Open | 2014
Geert Loosveldt; Koen Beullens
It is generally accepted that interviewers have a considerable effect on survey response. The difference between response success and failure not only affects the response rate, but can also influence the composition of the realized sample or respondent set, and consequently introduce nonresponse bias. To measure these two different aspects of the obtained sample, response propensities are used. Their aggregate mean and variance can both be used to construct quality indicators for the obtained sample of respondents. As these propensities can also be measured at the interviewer level, this allows an evaluation of the interviewer group and of the extent to which individual interviewers contribute to a biased respondent set. In this article, a procedure based on a multilevel model with random intercepts and random slopes is elaborated and illustrated. The results show that the procedure is informative for detecting influential interviewers with an impact on nonresponse bias.
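A minimal sketch of the underlying idea: estimate response propensities from auxiliary covariates and summarize their mean and variance, overall and per interviewer. Note that the article's procedure is based on a multilevel model with random intercepts and slopes, whereas this sketch uses a plain logistic regression on simulated data with hypothetical variable names.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical sample-level data: one row per sampled case, with the outcome
# (responded or not), a few auxiliary covariates, and the interviewer id.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "responded": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "urban": rng.integers(0, 2, n),
    "interviewer": rng.integers(1, 51, n),
})

# Estimate response propensities from the auxiliary covariates.
X = df[["age", "urban"]]
model = LogisticRegression().fit(X, df["responded"])
df["propensity"] = model.predict_proba(X)[:, 1]

# Aggregate mean and variance of the propensities, overall and per interviewer:
# a low variance suggests a more balanced respondent set, and interviewers whose
# cases show unusual propensity profiles can be flagged for closer inspection.
print(df["propensity"].agg(["mean", "var"]))
print(df.groupby("interviewer")["propensity"].agg(["mean", "var"]).head())
```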
Social Change | 2018
Koen Beullens; Geert Loosveldt; Caroline Vandenplas; Ineke Stoop
Response rates are declining, increasing the risk of nonresponse error. The reasons for this decline are multiple: the rise of online surveys, mobile phones, and information requests; societal changes; greater awareness of privacy issues; etc. To combat this decline, fieldwork efforts have become increasingly intensive: widespread use of respondent incentives, advance letters, and an increased number of contact attempts. In addition, complex fieldwork strategies such as adaptive call scheduling or responsive designs have been implemented. These additional efforts to counterbalance nonresponse complicate the measurement of the increased difficulty of contacting potential respondents and convincing them to cooperate. To observe developments in response rates, we use the first seven rounds of the European Social Survey, a biennial face-to-face survey. Despite some changes to the fieldwork efforts in some countries (choice of survey agency, available sample frame, incentives, number of contact attempts), many characteristics have remained stable: effective sample size, (contact and) survey mode, and questionnaire design. To control for the differing composition of countries across rounds, we use a multilevel model with countries as level 2 units and the response rates in each country-year combination as level 1 units. The results show a declining trend, although only round 7 has a significant negative effect.
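A sketch of a multilevel specification in this spirit, with one observed response rate per country and round (level 1) nested in countries (level 2) and round dummies capturing the trend; the paper's exact parameterization may differ.

```latex
% Illustrative two-level model: response rate of country c in round r,
% with a country random intercept and round fixed effects (round 1 as reference).
RR_{cr} = \beta_0 + \sum_{k=2}^{7}\beta_k\,\mathrm{Round}_{k,cr} + u_c + e_{cr},
\qquad u_c \sim N(0,\sigma^2_{\mathrm{country}}), \quad e_{cr} \sim N(0,\sigma^2_{e}).
```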
Quality Assurance in Education | 2018
Geert Loosveldt; Celine Wuyts; Koen Beullens
Purpose: In survey methodology, it is well known that interviewers can have an impact on the registered answers. This paper focuses on one type of interviewer effect, which arises from differences between interviewers in the systematic effect each interviewer has on the answers. Two cases are presented. In the first, the authors evaluate interviewer effects on the measurement of alcohol consumption in European countries. The second case concerns interviewer effects on the respondents' tendency to (non)differentiate their responses and the consequences of this response style for the correlations between variables.
Design/methodology/approach: The interviewer effects are evaluated by means of interviewer variance analysis. Because respondents are nested within interviewers, a two- or three-level random intercept model can be specified to calculate the proportion of variance explained by the interviewers. Data from the seventh round of the European Social Survey are used.
Findings: The results of the first case show that the substantive conclusions about the effect of gender and education on the alcohol measures continue to hold when interviewer effects are taken into account. The results of the second case make clear that interviewer effects on attitudinal questions are considerable. There is also a significant effect of the interviewers on the degree to which respondents differentiate their responses. The results further illustrate that correlations between attitudinal variables are affected. This implies that the results of statistical procedures using a correlation or covariance matrix can be strongly influenced by the tendency to (non)differentiate and by the interviewers' impact on this tendency.
Originality/value: The results clearly demonstrate that there are considerable differences between countries in the impact of the interviewers on substantive variables. Cross-national differences are striking, which makes clear the importance and necessity of evaluating interviewer effects in cross-national surveys.
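For reference, in the simplest two-level version of such an interviewer variance analysis, the share of variance attributable to interviewers is the intra-interviewer correlation; this is only the basic form, not the paper's full specification, which can add a country or respondent level and covariates.

```latex
% Two-level random intercept model for answer y of respondent j interviewed by
% interviewer i, and the resulting proportion of variance due to interviewers.
y_{ij} = \beta_0 + u_i + e_{ij},\qquad
\rho_{\mathrm{int}} = \frac{\sigma^2_{u}}{\sigma^2_{u} + \sigma^2_{e}},
\qquad u_i \sim N(0,\sigma^2_u), \quad e_{ij} \sim N(0,\sigma^2_e).
```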
Archive | 2018
Vasja Vehovar; Koen Beullens
After decades in which nonprobability sampling approaches were neglected (there is still no textbook on this widespread practice), recent years have finally brought a breakthrough in academic attention to this approach, including formal professional acceptance (e.g., the American Association for Public Opinion Research (AAPOR) code) as well as increased scientific research attention. This chapter provides an overview of the structure and trends of research on response rates conducted between 1990 and 2015. The overview indicates, mirroring the trend in existing published work, that nonresponse is rarely treated in a comprehensive and integrative manner. The chapter also highlights research projects in which the nonresponse rate, nonresponse bias, data quality, and costs are examined simultaneously, using the European Social Survey as an example.
Journal of Official Statistics | 2017
Caroline Vandenplas; Geert Loosveldt; Koen Beullens
Adaptive and responsive survey designs rely on monitoring indicators based on paradata. This process can better inform fieldwork management if the indicators are paired with a benchmark that relies on empirical information collected in the first phase of the fieldwork or, for repeated or longitudinal surveys, in previous rounds or waves. We propose the “fieldwork power” (fieldwork production per time unit) as an indicator for monitoring, and we simulate its use for the European Social Survey (ESS) Round 7 in Belgium and in the Czech Republic. We operationalize fieldwork power as the weekly number of completed interviews, the weekly number of contacts, and the ratio of the number of completed interviews to the number of contact attempts and to the number of refusals. To create benchmarks, we use a repeated measurement multilevel model, with surveys in the previous rounds of the European Social Survey as the macro level and the weekly fieldwork power as repeated measurements. We also monitor effort and data quality metrics. The results show how problems in the evolution of the fieldwork can be detected by monitoring the fieldwork power and comparing it with the benchmarks. The analysis also proves helpful for post-survey fieldwork evaluation, linking effort, productivity, and data quality.
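A minimal Python sketch of how such weekly indicators could be computed from call-record paradata; the column names and outcome codes are hypothetical, and the benchmark model built from previous rounds is not included.

```python
import pandas as pd

# Hypothetical call-record data: one row per contact attempt, with the fieldwork
# week and the outcome of the attempt. Column names and codes are illustrative.
attempts = pd.DataFrame({
    "week":    [1, 1, 1, 2, 2, 2, 2, 3, 3],
    "outcome": ["interview", "refusal", "noncontact",
                "interview", "interview", "refusal", "noncontact",
                "interview", "noncontact"],
})

weekly = attempts.groupby("week").agg(
    attempts=("outcome", "size"),
    interviews=("outcome", lambda s: (s == "interview").sum()),
    contacts=("outcome", lambda s: (s != "noncontact").sum()),
    refusals=("outcome", lambda s: (s == "refusal").sum()),
)

# Simple fieldwork power indicators per week, in the spirit of the article:
# weekly production (interviews, contacts) and productivity per unit of effort.
weekly["interviews_per_attempt"] = weekly["interviews"] / weekly["attempts"]
weekly["interviews_per_refusal"] = weekly["interviews"] / weekly["refusals"].replace(0, float("nan"))
print(weekly)
```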
International Statistical Review | 2012
Barry Schouten; Jelke Bethlehem; Koen Beullens; Øyvin Kleven; Geert Loosveldt; Annemieke Luiten; Katja Rutar; Natalie Shlomo; Chris J. Skinner
ASK. Research and Methods | 2009
Jaak Billiet; Hideko Matsuo; Koen Beullens; Vasja Vehovar
Archive | 2007
Hideko Matsuo; Katrien Symons; Koen Beullens; Jaak Billiet