Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Natalja Menold is active.

Publication


Featured research published by Natalja Menold.


Field Methods | 2014

How Do Respondents Attend to Verbal Labels in Rating Scales?

Natalja Menold; Lars Kaczmirek; Timo Lenzner; Aleš Neusar

Two formats of labeling in rating scales are commonly used in questionnaires: verbal labels for the end categories only (END form) and verbal labels for each of the categories (ALL form). We examine attention processes and respondent burden in using verbal labels in rating scales. Attention was tracked in a laboratory setting employing eye-tracking technology. The results of two experiments are presented: one applied seven and the other five categories in rating scales comparing the END and ALL forms (n = 47 in each experiment). The results show that the ALL form provides higher reliability, although the probability that respondents attend to a verbal label seems to decrease as the number of verbally labeled categories increases.
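The reliability comparison above can be made concrete with a small simulation. The Python sketch below is purely illustrative: the item count, factor loadings, and the mapping onto a 7-point scale are assumptions rather than parameters from the study; it only shows how a format-related reliability difference would surface in a conventional coefficient such as Cronbach's alpha.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n, k = 47, 6  # n = 47 per experiment, as in the abstract; the item count is assumed

def simulate(loading: float) -> np.ndarray:
    """Simulate 7-point ratings driven by one latent trait;
    a higher loading yields higher reliability."""
    trait = rng.normal(size=(n, 1))
    noise = rng.normal(size=(n, k))
    raw = loading * trait + np.sqrt(1 - loading**2) * noise
    return np.clip(np.round(3 * raw + 4), 1, 7)  # map onto categories 1..7

all_form = simulate(loading=0.80)  # ALL form: assumed stronger item-trait coupling
end_form = simulate(loading=0.65)  # END form: assumed weaker coupling
print(f"alpha, ALL form: {cronbach_alpha(all_form):.2f}")
print(f"alpha, END form: {cronbach_alpha(end_form):.2f}")
```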


Global Qualitative Nursing Research | 2016

Methodological Aspects of Focus Groups in Health Research: Results of Qualitative Interviews With Focus Group Moderators

Anja P. Tausch; Natalja Menold

Although focus groups are commonly used in health research to explore the perspectives of patients or health care professionals, few studies consider methodological aspects in this specific context. For this reason, we interviewed nine researchers who had conducted focus groups in the context of a project devoted to the development of an electronic personal health record. We performed qualitative content analysis on the interview data relating to recruitment, communication between the focus group participants, and appraisal of the focus group method. The interview data revealed aspects of the focus group method that are particularly relevant for health research and that should be considered in that context. They include, for example, the preferability of face-to-face recruitment, the necessity to allow participants in patient groups sufficient time to introduce themselves, and the use of methods such as participant-generated cards and prioritization.


Social Science Computer Review | 2015

The Influence of the Answer Box Size on Item Nonresponse to Open-Ended Questions in a Web Survey

Cornelia Zuell; Natalja Menold; Sabine Körber

This article investigates item nonresponse to open-ended survey questions, which is much higher than for closed questions; the difference results from the higher cognitive burden placed on the respondent. To study item nonresponse, we manipulate different questionnaire design characteristics, such as the size of the answer box and the inclusion of motivation texts, as well as respondent-specific characteristics, in a randomized web experiment using a student sample. The results show that a motivation text increases the frequency of responses to open-ended questions for both small and large answer boxes. However, large answer boxes yield higher item nonresponse than small answer boxes regardless of the use of a motivation text. In addition, gender and the respondent’s field of study affected the answering of open-ended questions: being a woman or studying social sciences increased the frequency of a response. As the major finding, and in contrast to previous findings, our results indicate that particularly large answer boxes should be avoided, because they reduce respondents’ willingness to respond.
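A design like this is typically analyzed by regressing the response indicator on the experimental factors and respondent characteristics. Below is a minimal, hypothetical sketch using statsmodels; the variable names and the placeholder random data are assumptions, not the study's codebook or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: all names and values are assumptions for illustration.
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "responded": rng.integers(0, 2, n),        # 1 = answered the open-ended question
    "large_box": rng.integers(0, 2, n),        # experimental factor: answer box size
    "motivation_text": rng.integers(0, 2, n),  # experimental factor: motivation text shown
    "female": rng.integers(0, 2, n),
    "social_sciences": rng.integers(0, 2, n),  # field of study
})

# Logistic regression of responding on design factors and respondent traits,
# with a box-size x motivation-text interaction.
model = smf.logit("responded ~ large_box * motivation_text + female + social_sciences",
                  data=df).fit(disp=False)
print(model.summary())
```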


Sociological Methods & Research | 2016

Measurement of Latent Variables With Different Rating Scales: Testing Reliability and Measurement Equivalence by Varying the Verbalization and Number of Categories

Natalja Menold; Anja P. Tausch

Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and the number of categories. The participants were 800 students at two German universities. In contrast to previous research, a reliability assessment method was used that relies on the congeneric measurement model. The experimental manipulation had differential effects on the reliability scores and measurement equivalence. Attitude strength appears to be a relevant moderator variable influencing measurement equivalence. Overall, the results show that measurement quality is influenced by rating scale form. The results are discussed in terms of their implications for the measurement of latent variables.
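For reference, under a congeneric measurement model with loadings \(\lambda_i\) and unique variances \(\theta_i\) (latent variance fixed at 1), composite reliability is commonly computed as \(\omega\) below. This is the standard textbook expression, not necessarily the exact estimator used in the article.

```latex
\omega \;=\; \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^{2}}
                  {\left(\sum_{i=1}^{k} \lambda_i\right)^{2} + \sum_{i=1}^{k} \theta_i}
```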


Statistical Journal of the IAOS | 2016

Validation of theoretical assumptions with real and falsified survey data

Uta Landrock; Natalja Menold

Falsification of survey data in face-to-face surveys has been intensively discussed in the literature. The findings on the impact of falsifications on survey data are equivocal: while some authors report a strong impact, others find only small differences between real and falsified data. We argue that the impact of falsifications cannot be neglected, particularly when theory-driven rather than ad hoc analyses are conducted; the latter reproduce stereotypes used by both researchers and falsifiers. To test this assumption, we compare the results of multivariate regression analyses on real and falsified data using (a) theory-driven predictors and (b) ad hoc predictors. As an example of theory-driven analysis, we used the theory of planned behavior (TPB) to predict self-reported healthy eating behavior. As ad hoc predictors, we included socio-demographic information about the respondents known to the falsifiers, as well as variables suggested by everyday theories. The results show that theory-driven relationships were more strongly pronounced in the real data, whereas stereotypical, non-theory-driven relationships were more strongly pronounced in the falsified data. The results also provide insights into social cognition when predicting the behavior of others.
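The comparison logic can be sketched as follows: fit one regression with theory-driven (TPB) predictors and one with ad hoc predictors on each dataset, then compare explained variance. The Python code below uses synthetic data and assumed variable names; it mimics the expected pattern rather than reproducing the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

def synthetic(theory_weight: float, adhoc_weight: float, n: int = 300) -> pd.DataFrame:
    """Toy data: the outcome is driven more by TPB constructs ('real' data)
    or by stereotypical demographics ('falsified' data). Names are assumptions."""
    df = pd.DataFrame({
        "attitude": rng.normal(size=n),
        "subjective_norm": rng.normal(size=n),
        "perceived_control": rng.normal(size=n),
        "age": rng.normal(size=n),
        "female": rng.integers(0, 2, n).astype(float),
    })
    df["healthy_eating"] = (
        theory_weight * (df.attitude + df.subjective_norm + df.perceived_control)
        + adhoc_weight * (df.age + df.female)
        + rng.normal(size=n)
    )
    return df

for label, df in {"real": synthetic(0.8, 0.1), "falsified": synthetic(0.1, 0.8)}.items():
    theory = smf.ols("healthy_eating ~ attitude + subjective_norm + perceived_control", df).fit()
    adhoc = smf.ols("healthy_eating ~ age + female", df).fit()
    print(f"{label}: theory-driven R2 = {theory.rsquared:.2f}, ad hoc R2 = {adhoc.rsquared:.2f}")
```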


Educational and Psychological Measurement | 2018

Studying Latent Criterion Validity for Complex Structure Measuring Instruments Using Latent Variable Modeling

Tenko Raykov; Natalja Menold; George A. Marcoulides

Validity coefficients for multicomponent measuring instruments are known to be affected by measurement error that attenuates them, affects associated standard errors, and influences results of statistical tests with respect to population parameter values. To account for measurement error, a latent variable modeling approach is discussed that allows point and interval estimation of the relationship of an underlying latent factor to a criterion variable in a setting that is more general than the commonly considered homogeneous psychometric test case. The method is particularly helpful in validity studies for scales with a second-order factorial structure, by allowing evaluation of the relationship between the second-order factor and a criterion variable. The procedure is similarly useful in studies of discriminant, convergent, concurrent, and predictive validity of measuring instruments with complex latent structure, and is readily applicable when measuring interrelated traits that share a common variance source. The outlined approach is illustrated using data from an authoritarianism study.
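As background, the attenuation the authors address is the classical one: an observed validity coefficient \(\rho_{XY}\) understates the latent relationship in proportion to the square root of the scale reliability \(\rho_{XX'}\). The latent variable modeling approach estimates the latent coefficient directly rather than correcting post hoc; the formula below is only the classical correction, shown for orientation, not the paper's estimator.

```latex
\rho_{\eta Y} \;=\; \frac{\rho_{XY}}{\sqrt{\rho_{XX'}}}
```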


Educational and Psychological Measurement | 2016

Can Reliability of Multiple Component Measuring Instruments Depend on Response Option Presentation Mode?

Natalja Menold; Tenko Raykov

This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the response options for its individual components differs across administrations of the instrument. Implications for empirical educational, behavioral, and social research are discussed.
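The interval estimation method used in the article is latent-variable based; as a rough, assumption-laden stand-in, a percentile bootstrap for the difference in Cronbach's alpha between two administrations looks like this in Python.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def alpha_diff_ci(a: np.ndarray, b: np.ndarray, reps: int = 2000, seed: int = 0):
    """Percentile bootstrap CI for alpha(a) - alpha(b), e.g. for the same
    instrument administered with reversed response option polarity.
    A simple stand-in, not the latent-variable method used in the article."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(reps)
    for r in range(reps):
        ra = a[rng.integers(0, len(a), len(a))]  # resample respondents with replacement
        rb = b[rng.integers(0, len(b), len(b))]
        diffs[r] = cronbach_alpha(ra) - cronbach_alpha(rb)
    return np.percentile(diffs, [2.5, 97.5])

# Usage: alpha_diff_ci(scores_polarity_a, scores_polarity_b)
# A CI excluding 0 would indicate a presentation-dependent reliability difference.
```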


Structural Equation Modeling | 2018

Revisiting the Bi-Factor Model: Can Mixture Modeling Help Assess Its Applicability?

Tenko Raykov; George A. Marcoulides; Natalja Menold; Michael Harrison

This article revisits, from the perspective of finite mixture modeling, the increasingly popular bi-factor model applied in contemporary behavioral and social research. It is pointed out that in a population with substantial unobserved heterogeneity resulting from a mixture of latent classes, in which the unidimensional model holds alongside models that markedly differ from the bi-factor model, the latter may turn out to be spuriously plausible. To raise caution about this possibility, an example of a three-class setting is provided in which, correspondingly, (a) the single (global) factor model, (b) a model with a global factor and a single local factor, and (c) a model with a global factor and two local factors hold, while the bi-factor model with a global factor and three local factors is also plausible for the analyzed data overall. Examination of population heterogeneity prior to testing the bi-factor model is therefore advisable in empirical research, in order to avoid spurious findings of its plausibility when substantial unobserved heterogeneity in the studied population is ignored.
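The three-class setting can be mimicked with a toy data generator: each class follows one of the factor structures (a) to (c), and the pooled sample is what an analyst who ignores the classes would model. Everything below (loadings, class sizes, noise level) is an assumption chosen only to illustrate the mechanism, not the article's simulation design.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_class, n_items = 500, 9

def factor_data(loadings: np.ndarray) -> np.ndarray:
    """Generate item scores from a linear factor model: X = F @ loadings' + noise."""
    n_factors = loadings.shape[1]
    factors = rng.normal(size=(n_per_class, n_factors))
    return factors @ loadings.T + rng.normal(scale=0.6, size=(n_per_class, n_items))

g = np.full((n_items, 1), 0.7)                       # global factor loads on all items
local1 = np.zeros((n_items, 1)); local1[0:3] = 0.5   # local factor on items 1-3
local2 = np.zeros((n_items, 1)); local2[3:6] = 0.5   # local factor on items 4-6

class_a = factor_data(g)                              # (a) single global factor
class_b = factor_data(np.hstack([g, local1]))         # (b) global + one local factor
class_c = factor_data(np.hstack([g, local1, local2])) # (c) global + two local factors
pooled = np.vstack([class_a, class_b, class_c])

# Fitting a bi-factor model to `pooled` while ignoring the three classes
# is where the spurious plausibility described above can arise.
print(pooled.shape, np.corrcoef(pooled.T).round(2)[0, :4])
```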


Structural Equation Modeling | 2018

On Examining Intervention Effects Upon Ability Development Using Latent Variable Modeling

Tenko Raykov; George A. Marcoulides; Natalja Menold; Tatyana Li; Mingcai Zhang

A procedure for evaluating intervention effects on ability development is discussed. The method does not postulate the traditional and routinely utilized version of full measurement invariance over time for all measures, while accounting for possible latent structure changes. The approach permits examining growth or decline free from practice effects and natural development in previously and newly emerging subabilities pertaining to a studied general ability construct. The procedure is based on an application of latent variable modeling and can be straightforwardly employed with popular software. The outlined method is illustrated using numerical data in a two-group intervention setting.
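Generically, models of this kind relate each observed measure \(X_{it}\) at occasion \(t\) to the ability factor \(\eta_t\); full measurement invariance would constrain loadings and intercepts to be equal across occasions, which the approach above deliberately does not require for all measures. This is a generic sketch, not the authors' exact specification.

```latex
X_{it} \;=\; \tau_{it} + \lambda_{it}\,\eta_{t} + \varepsilon_{it},
\qquad t = 1, 2, \quad i = 1, \dots, k
```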


Mathematical Population Studies | 2018

Design aspects of rating scales in questionnaires

Natalja Menold; Christof Wolf; Kathrin Bogner

Since their introduction by Thurstone (1929) and Likert (1932), rating scales have been a defining element of questionnaires. A rating scale usually defines the graduations of a continuum such as agreement, intensity, frequency, or satisfaction. Respondents evaluate questions and items, which usually concern personal characteristics, opinions, and behavior, by marking the appropriate category. Parducci (1983) defines responses as functions of the continuum of a rating scale: they range between the end poles and depend on the graduation of the scale. However, their quality should not be influenced by the characteristics of the rating scale.

Menold and Bogner (2016) review the characteristics of rating scales as follows: total number of categories, use of middle and “do not know” options, category labeling, scale orientation (starting with a negative or a positive value, or a lower or a higher value), scale polarity (use of verbal opposites), and visual presentation. The best design of rating scales remains controversial. Moreover, characteristics of rating scales can affect the quality of measurement (Krosnick and Fabrigar, 1997). Menold and Tausch (2016) demonstrate that different total numbers of categories have different psychometric properties and that verbalization affects measurement. Data can no longer be compared if they have been produced with different rating scales.

Graduations used in rating scales involve metric properties, because they are supposed to correspond to equal differences between categories. Orth (1982) and Westermann (1985) criticized this assumption of equidistance. The so-called “visual design” (Christian and Dillman, 2004; Tourangeau et al., 2007) was introduced to identify the influence of the graphical presentation of rating scales on responses: responses could, consciously or not, be biased by graphical features. According to Schaefer and Dykema (2011: 912), “although past research often allows us to predict how a marginal distribution will be affected ... we are too often unable to say which version of a question is more reliable or valid.” That is why the reliability and validity of rating scales constitute an issue, which this special issue addresses.

Collaboration


Dive into Natalja Menold's collaborations.

Top Co-Authors

Tenko Raykov

Michigan State University

Stefanie Eifler

Catholic University of Eichstätt-Ingolstadt

Tatyana Li

Michigan State University
