Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Melanie Revilla is active.

Publication


Featured research published by Melanie Revilla.


Sociological Methods & Research | 2014

Choosing the Number of Categories in Agree–Disagree Scales

Melanie Revilla; Willem E. Saris; Jon A. Krosnick

Although agree–disagree (AD) rating scales suffer from acquiescence response bias, entail enhanced cognitive burden, and yield data of lower quality, these scales remain popular with researchers due to practical considerations (e.g., ease of item preparation, speed of administration, and reduced administration costs). This article shows that if researchers want to use AD scales, they should offer 5 answer categories rather than 7 or 11, because the latter yield data of lower quality. This is shown using data from four multitrait-multimethod experiments implemented in the third round of the European Social Survey. The quality of items with different rating scale lengths was computed and compared.
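
For readers unfamiliar with the quality measure referred to here, a compact sketch of the true-score MTMM measurement model commonly used in this line of work (generic notation, not coefficients taken from the paper): an observed answer y_{ij} to trait i measured with method j is written as

y_{ij} = r_{ij} T_{ij} + e_{ij},   T_{ij} = v_{ij} F_i + m_{ij} M_j,   q_{ij} = r_{ij} v_{ij}

where r is the reliability coefficient, v the validity coefficient, m the method effect, and quality is defined as q_{ij}^2. Comparing q^2 for 5-, 7-, and 11-category versions of the same items is what "quality of items with different rating scale lengths" refers to.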


Social Science Computer Review | 2015

What are the Links in a Web Survey Among Response Time, Quality, and Auto-Evaluation of the Efforts Done?

Melanie Revilla; Carlos Ochoa

Evaluating the quality of the data is a key preoccupation for researchers to be confident in their results. When web surveys are used, it seems even more crucial since the researchers have less control over the data collection process. However, they also have the possibility to collect some paradata that may help evaluate the quality. Using these paradata, it has been noticed that some respondents of web panels spend much less time than expected to complete the surveys. This creates worries about the quality of the data obtained. Nevertheless, not much is known about the link between response times (RTs) and quality. Therefore, the goal of this study is to look at the link between the RTs of respondents in an online survey and other more usual indicators of quality used in the literature: properly following an instructional manipulation check, coherence and precision of answers, absence of straight-lining, and so on. We are also interested in the links between RTs, these quality indicators, and respondents' auto-evaluation of the effort they made to answer the survey. Using a structural equation modeling approach that allows separating the structural and the measurement models and controlling for potential spurious effects, we find a significant relationship between RT and quality in the three countries studied. We also find a significant, but lower, relationship between RT and auto-evaluation. However, we did not find a significant link between auto-evaluation and quality.
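
As a rough illustration of how paradata-based indicators like these can be derived from a response file, here is a minimal sketch in Python; the column names, items, and thresholds are hypothetical and not taken from the study.

```python
import pandas as pd

# Hypothetical respondent-level data: total response time in seconds,
# answers to a 6-item grid, and the answer to an instructional
# manipulation check (IMC) whose correct value is assumed to be 3.
df = pd.DataFrame({
    "resp_id": [1, 2, 3],
    "rt_seconds": [420.0, 95.0, 610.0],
    "q1": [4, 3, 5], "q2": [4, 3, 2], "q3": [5, 3, 4],
    "q4": [3, 3, 5], "q5": [4, 3, 1], "q6": [4, 3, 4],
    "imc": [3, 5, 3],
})
grid_items = ["q1", "q2", "q3", "q4", "q5", "q6"]

# Indicator 1: passed the instructional manipulation check.
df["imc_passed"] = df["imc"].eq(3)

# Indicator 2: straight-lining, i.e. zero variation across the grid items.
df["straight_lining"] = df[grid_items].nunique(axis=1).eq(1)

# Indicator 3: speeding, i.e. an unusually short response time relative
# to the sample median (threshold chosen arbitrarily here).
df["speeding"] = df["rt_seconds"] < 0.3 * df["rt_seconds"].median()

print(df[["resp_id", "imc_passed", "straight_lining", "speeding"]])
```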


Internet Research | 2016

Do online access panels need to adapt surveys for mobile devices?

Melanie Revilla; Daniele Toninelli; Carlos Ochoa; Germán Loewe

Purpose: Despite the quick spread of the use of mobile devices in survey participation, there is still little knowledge about the potentialities and challenges that arise from this increase. The purpose of this paper is to study how respondents’ preferences drive their choice of a certain device when participating in surveys. Furthermore, this paper evaluates the tolerance of participants when specifically asked to use mobile devices and carry out other specific tasks, such as taking photographs.

Design/methodology/approach: Data were collected by surveys in Spain, Portugal and Latin America by Netquest, an online fieldwork company.

Findings: Netquest panellists still mainly preferred to participate in surveys using personal computers. Nevertheless, the use of tablets and smartphones in surveys showed an increasing trend; more panellists would prefer mobile devices, if the questionnaires were adapted to them. Most respondents were not opposed to the idea of participating in tasks such as taking photographs or sharing GPS information.

Research limitations/implications: The research concerns an opt-in online panel that covers a specific area. For probability-based panels and other areas the findings may be different.

Practical implications: The findings show that online access panels need to adapt their surveys to mobile devices to satisfy the increasing demand from respondents. This will also allow new, and potentially very interesting, data collection methods.

Originality/value: This study contributes to survey methodology with updated findings focusing on a currently underexplored area. Furthermore, it provides commercial online panels with useful information to determine their future strategies.
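
As background to "adapting surveys for mobile devices," a survey platform first has to know which device a respondent is using. Below is a minimal, purely illustrative sketch of classifying the device from the browser's user-agent string; the regular expressions and categories are simplifying assumptions, not Netquest's actual implementation.

```python
import re

def classify_device(user_agent: str) -> str:
    """Very rough device classification from a user-agent string."""
    ua = user_agent.lower()
    if re.search(r"ipad|tablet", ua):
        return "tablet"
    if re.search(r"mobi|iphone|android", ua):
        # Deliberately simplified: some Android tablets do not advertise
        # "tablet" and would be misclassified as smartphones here.
        return "smartphone"
    return "pc"

print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)"))  # smartphone
print(classify_device("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))               # pc
```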


Structural Equation Modeling | 2013

The Split-Ballot Multitrait-Multimethod Approach: Implementation and Problems

Melanie Revilla; Willem E. Saris

Saris, Satorra, and Coenders (2004) proposed a new approach to estimate the quality of survey questions, combining the advantages of 2 existing approaches: the multitrait–multimethod (MTMM) and the split-ballot (SB) ones. Implemented in practice, this new approach led to frequent problems of nonconvergence and improper solutions. This article uses Monte Carlo simulations to understand why the SB-MTMM approach works well in some cases but not in others. The number of SB groups is a crucial element: the 3-group design performs better. However, the 2-group design can also perform well: the analyses suggest that the interaction between the absolute values of the correlations between the traits and the relative values of the different correlations between traits plays an important role.
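
To make the simulation setup more concrete, here is a minimal sketch of how split-ballot MTMM data could be generated for a Monte Carlo study: three correlated traits, three methods, and a 2-group split-ballot assignment in which each group answers only two of the three methods. All parameter values and design details are illustrative assumptions, not those used in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1200                      # respondents
n_traits, n_methods = 3, 3
trait_corr = np.array([[1.0, 0.5, 0.3],
                       [0.5, 1.0, 0.4],
                       [0.3, 0.4, 1.0]])
validity, method_effect, reliability = 0.9, 0.3, 0.85

# Correlated latent traits and independent method factors.
traits = rng.multivariate_normal(np.zeros(n_traits), trait_corr, size=n)
methods = rng.standard_normal((n, n_methods))

# Observed variable y[:, i, j]: trait i measured with method j.
y = np.empty((n, n_traits, n_methods))
for i in range(n_traits):
    for j in range(n_methods):
        true_score = validity * traits[:, i] + method_effect * methods[:, j]
        error = np.sqrt(1 - reliability**2) * rng.standard_normal(n)
        y[:, i, j] = reliability * true_score + error

# 2-group split-ballot design: both groups get method 1, and each group
# gets only one of the two remaining methods (the other is missing).
group = rng.integers(0, 2, size=n)
y[group == 0, :, 2] = np.nan      # group 0 never sees method 3
y[group == 1, :, 1] = np.nan      # group 1 never sees method 2

print("Share of missing cells:", round(np.isnan(y).mean(), 2))
```

In an actual study, each simulated data set would then be fed to an SEM estimator of the MTMM model, and the rates of nonconvergence and improper solutions would be recorded.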


Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique | 2012

Impact of the Mode of Data Collection on the Quality of Answers to Survey Questions Depending on Respondent Characteristics

Melanie Revilla

The Internet is used more and more to conduct surveys. However, moving from traditional modes of data collection to the Internet may threaten the comparability of the data if the mode has an impact on the way respondents answer. In previous research, Revilla and Saris (2012) found similar average quality (defined as the product of reliability and validity) for several survey questions when asked in a face-to-face interview and when asked online. But does this mean that the mode of data collection does not have an impact on the quality? Or may it be that for some respondents the quality is higher for web surveys whereas for others it is lower, such that on average the quality for the complete sample is similar? Comparing the quality for different groups of respondents in a face-to-face and in a web survey, no significant impact on quality is found for the background characteristics, the mode, or the interaction between them.


Telematics and Informatics | 2017

An experiment comparing grids and item-by-item formats in web surveys completed through PCs and smartphones

Melanie Revilla; Daniele Toninelli; Carlos Ochoa

Highlights:
- The device used to answer web surveys should be taken into account.
- Similar levels of interitem correlations are found on PCs and smartphones.
- Longer completion times are found for grid questions for smartphone respondents.
- Sometimes less non-differentiation is observed for PCs.
- Using an item-by-item format for all devices is a way to improve comparability.

Some respondents already complete web surveys via mobile devices. These devices differ from PCs in several ways. In particular, we expect differences when grid questions are used, due to the lower visibility on mobile devices and because, in questionnaires optimized to be completed through smartphones, grids are split up into an item-by-item format. This paper reports the results of a two-wave experiment conducted in Spain in 2015, comparing three groups: PCs, smartphones not optimized, and smartphones optimized. We found similar levels of interitem correlations, longer completion times for grid questions for smartphone respondents, and sometimes less non-differentiation for PCs. Thus, using the item-by-item format for both smartphones and PCs seems the most appropriate way to improve comparability.
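
As a small illustration of how one of these outcomes, non-differentiation within a grid, can be quantified and compared across experimental groups, here is a sketch with invented data and variable names (the paper's own indicators may differ).

```python
import pandas as pd

# Hypothetical answers to a 4-item grid (1-5 scale) plus the experimental group.
df = pd.DataFrame({
    "group": ["pc", "pc", "smartphone_optimized", "smartphone_optimized"],
    "g1": [2, 4, 3, 3], "g2": [2, 5, 3, 4], "g3": [2, 3, 3, 2], "g4": [2, 4, 3, 5],
})
items = ["g1", "g2", "g3", "g4"]

# Non-differentiation index: the within-respondent standard deviation of the
# grid answers; 0 corresponds to pure straight-lining.
df["nondiff"] = df[items].std(axis=1)

# Compare average non-differentiation across device groups.
print(df.groupby("group")["nondiff"].mean())
```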


Social Science Computer Review | 2017

Using Passive Data From a Meter to Complement Survey Data in Order to Study Online Behavior

Melanie Revilla; Carlos Ochoa; Germán Loewe

Surveys have been used as the main tool of data collection in many areas of research and for many years. However, the environment is changing increasingly quickly, creating new challenges and opportunities. This article argues that, in this new context, human memory limitations lead to inaccurate results when using surveys in order to study objective online behavior: people cannot recall everything they did. It therefore investigates the possibility of using, in addition to survey data, passive data from a tracking application (called a “meter”) installed on participants’ devices to register their online behavior. After evaluating the extent of some of the main drawbacks linked to passive data collection with a case study (the Netquest metered panel in Spain), this article shows that the data from the web survey and the meter lead to very different results about the online behavior of the same sample of respondents, showing the need to combine several sources of data collection in the future.
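
The kind of comparison this implies can be sketched as follows: merge the self-reported and the metered measure of the same behavior for each panelist and look at the discrepancy. Variable names and numbers below are invented for illustration only.

```python
import pandas as pd

# Self-reported minutes of online shopping last week (from the web survey)...
survey = pd.DataFrame({"panelist_id": [1, 2, 3],
                       "reported_minutes": [30, 120, 0]})

# ...and minutes actually registered by the tracking meter for the same week.
meter = pd.DataFrame({"panelist_id": [1, 2, 3],
                      "metered_minutes": [75, 110, 40]})

merged = survey.merge(meter, on="panelist_id")
merged["discrepancy"] = merged["reported_minutes"] - merged["metered_minutes"]

print(merged)
print("Mean absolute discrepancy (minutes):", merged["discrepancy"].abs().mean())
```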


Social Science Computer Review | 2016

What Is the Gain in a Probability-Based Online Panel of Providing Internet Access to Sampling Units Who Previously Had No Access?

Melanie Revilla; Anne Cornilleau; Stéphane Legleye; Pablo de Pedraza

The Internet is considered an attractive option for survey data collection. However, some people do not have access to it. One way to address this coverage problem for general population surveys is to draw a probabilistic sample and provide Internet access to the selected units who do not have it and agree to participate. This is what the KnowledgePanel and the Longitudinal Internet Studies for the Social Sciences (LISS) panel do. However, a selection effect is still possible. Units without previous Internet access might refuse to participate in a web panel, even if provided with the necessary equipment. Thus, efforts to provide the necessary equipment may not be worth it. This article investigates the gain in terms of representativeness of offering the equipment to non-Internet units in a web panel using tablets: the French Longitudinal Internet Studies for the Social Sciences (ELIPSS) panel. We find that the number of non-Internet units who agree to participate is low. This is due not only to the fact that their response rates are lower but also to the small proportion of non-Internet units in the French population. In addition, they participate less in given surveys once they become panelists. At the same time, they are very different from the Internet units. Therefore, even if, because of the small number of units, the overall gain in representativeness is small, there are a few important variables (e.g., education) on which their inclusion yields a more representative sample of the general population.


Structural Equation Modeling | 2014

Reassessing the Effect of Survey Characteristics on Common Method Bias in Emotional and Social Intelligence Competencies Assessment

Joan Manuel Batista-Foguet; Melanie Revilla; Willem E. Saris; Richard E. Boyatzis; Ricard Serlavós

Since the idea of method variance was introduced by D. T. Campbell and D. W. Fiske in 1959, many papers have fueled an ongoing debate about both its nature and impact. Often, method variance entails an upward bias in correlations among observed variables, known as common method bias. This article reports a split-ballot multitrait–multimethod experimental design for estimating 2 opposite biases: the upward-biasing method variance from the reaction to the length of the response scale and the position of the survey items in the questionnaire, and the downward-biasing effect of poor data quality. The data are derived from self-reported behavior related to emotional and social competencies. This article illustrates a methodology to estimate common method bias and its components: common method scale variance, common method occasion variance, and the attenuation effect due to measurement errors. The results show that common method variance has a much smaller impact than random and systematic measurement errors. The results also corroborate previous findings: the greater reliability of longer scales and the lower reliability of items placed toward the end of the survey.
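
In the MTMM notation sketched earlier, the two opposite biases can be written compactly. For two items i and k that share the same method j, this modeling tradition writes the observed correlation as (a generic expression, not a result from the paper):

corr(y_{ij}, y_{kj}) = q_{ij} q_{kj} \rho(F_i, F_k) + r_{ij} m_{ij} r_{kj} m_{kj}

The first term shows the attenuation due to measurement error (quality coefficients below 1 pull the observed correlation down), while the second term is the inflation due to shared method variance (common method bias pushes it up).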


Social Science Computer Review | 2018

Comparing Grids With Vertical and Horizontal Item-by-Item Formats for PCs and Smartphones

Melanie Revilla; Mick P. Couper

Much research has been done comparing grids and item-by-item formats. However, the results are mixed, and more research is needed especially when a significant proportion of respondents answer using smartphones. In this study, we implemented an experiment with seven groups (n = 1,476), varying the device used (PC or smartphone), the presentation of the questions (grids, item-by-item vertical, item-by-item horizontal), and, in the case of smartphones only, the visibility of the “next” button (always visible or only visible at the end of the page, after scrolling down). The survey was conducted by the Netquest online fieldwork company in Spain in 2016. We examined several outcomes for three sets of questions, which are related to respondent behavior (completion time, lost focus, answer changes, and screen orientation) and data quality (item missing data, nonsubstantive responses, instructional manipulation check failure, and nondifferentiation). The most striking difference found is for the placement of the next button in the smartphone item-by-item conditions: When the button is always visible, item missing data are substantially higher.

Collaboration


Dive into Melanie Revilla's collaborations.

Top Co-Authors

Wiebke Weber

Pompeu Fabra University
