Thomas Salzberger
Vienna University of Economics and Business
Publications
Featured research published by Thomas Salzberger.
Journal of Advertising | 2005
Michael T. Ewing; Thomas Salzberger; Rudolf R. Sinkovics
This paper offers a new methodological framework to guide researchers attempting to quantitatively assess how a pluralistic audience perceives a standardized television advertisement. Rasch (1960) measurement theory is introduced as an alternative to the more commonly employed multigroup confirmatory factor analysis (CFA) approach to assessing cross-cultural scalar equivalence. By analyzing a multicultural data set, we are able to make various inferences concerning the scalar equivalence of Schlinger's confusion scale. The methodology reveals the limits of the scale, which in all probability would not have been detected using traditional approaches. For researchers attempting to develop new scales, or even to refine existing scales, strict adherence to established guidelines of item generation, together with the application of the proposed methodology, should ensure better results for both theorists and practitioners.
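For orientation, the core of the Rasch measurement theory invoked here is the Rasch model itself; a minimal statement of the dichotomous form follows (the paper works with Schlinger's multi-item scale, so a polytomous variant may be the one actually applied):

```latex
% Dichotomous Rasch model: probability that person n endorses item i
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```

Here \theta_n is the person location and \delta_i the item location on a common latent dimension; cross-cultural scalar equivalence then amounts to the item locations remaining invariant across cultural groups.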
International Marketing Review | 2006
Thomas Salzberger; Rudolf R. Sinkovics
Purpose: The paper investigates the suitability of the Rasch model for establishing data equivalence. The results based on a real data set are contrasted with findings from standard procedures based on CFA methods. Design/methodology/approach: Sinkovics et al.'s data on technophobia were used and re-evaluated using both classical test theory (CTT) (multiple-group structural equation modelling) and Rasch measurement theory. Findings: Data equivalence in particular and measurement in general cannot be addressed without reference to theory. While both procedures can be considered best-practice approaches within their respective theoretical foundations of measurement, the Rasch model provides some theoretical virtues. Measurement derived from data that fit the Rasch model seems to be approximated reasonably well by classical procedures; however, the reverse is not necessarily true. Practical implications: The more widespread application of Rasch models would lead to a stronger justification of measurement, in...
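A minimal sketch of the kind of item-level equivalence check this contrast implies, using simulated dichotomous data and hypothetical item setups; it is not the authors' code, and joint maximum likelihood is used only for brevity (it is known to be biased):

```python
# Illustrative sketch only: a rough Rasch-based check of item-level equivalence
# across two groups, in the spirit of differential item functioning (DIF) analysis.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

def simulate(n_persons, deltas):
    """Simulate dichotomous responses under the Rasch model."""
    thetas = rng.normal(0.0, 1.0, size=n_persons)
    p = expit(thetas[:, None] - deltas[None, :])
    return (rng.uniform(size=p.shape) < p).astype(int)

def estimate_deltas(data):
    """Rough joint-maximum-likelihood estimates of item locations, centered at zero."""
    n_persons, n_items = data.shape

    def neg_loglik_and_grad(params):
        theta, delta = params[:n_persons], params[n_persons:]
        logits = theta[:, None] - delta[None, :]
        p = expit(logits)
        loglik = np.sum(data * logits - np.logaddexp(0.0, logits))
        penalty = 1e-3 * np.sum(theta ** 2)  # keeps extreme raw scores finite, pins the origin
        resid = data - p
        grad_theta = -resid.sum(axis=1) + 2e-3 * theta
        grad_delta = resid.sum(axis=0)
        return -loglik + penalty, np.concatenate([grad_theta, grad_delta])

    res = minimize(neg_loglik_and_grad, np.zeros(n_persons + n_items),
                   method="L-BFGS-B", jac=True)
    deltas = res.x[n_persons:]
    return deltas - deltas.mean()

# Two hypothetical groups answer the same five items; item 4 is made harder in
# group B to mimic a lack of equivalence.
true_deltas = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
group_a = simulate(400, true_deltas)
group_b = simulate(400, true_deltas + np.array([0.0, 0.0, 0.0, 1.0, 0.0]))

d_a, d_b = estimate_deltas(group_a), estimate_deltas(group_b)
print("group A item locations:", np.round(d_a, 2))
print("group B item locations:", np.round(d_b, 2))
print("A - B (large gaps flag potential non-equivalence):", np.round(d_a - d_b, 2))
```

Comparable item locations across groups would support equivalence; a markedly shifted item, as engineered for item 4 here, is the kind of anomaly a Rasch-based analysis is meant to flag.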
Australasian Marketing Journal (AMJ) | 1999
Thomas Salzberger; Rudolf R. Sinkovics; Bodo B. Schlegelmilch
Given the increasing importance of international business, cross-cultural research is becoming more and more relevant to marketing academics and practitioners. This paper illustrates the difficulties of achieving equivalence when conducting marketing research across borders. It opens with a general typology of equivalence issues in cross-cultural research and, subsequently, focuses specifically on data equivalence. Recent studies either disregard data equivalence altogether or predominantly suggest the use of simultaneous confirmatory factor analysis (CFA) for establishing data equivalence. Latent trait theory (LTT), based on a different measurement paradigm, offers an alternative that promises to overcome many of the problems inherent in CFA. This paper contrasts the advantages and disadvantages of both approaches and illustrates their application by means of a simulated data set. In conclusion, a call is made for the incorporation of equivalence issues in scale development through quantitative as well as qualitative analyses.
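For context, the CFA route to data equivalence mentioned above constrains the parameters of a common factor model across groups; a compact statement (my paraphrase, not taken from the paper) is:

```latex
% Common factor model for indicator i of person n in group g
x_{ni}^{(g)} = \tau_i^{(g)} + \lambda_i^{(g)} \xi_n^{(g)} + \delta_{ni}^{(g)}
% Metric (loading) invariance:   \lambda_i^{(1)} = \lambda_i^{(2)} = \dots = \lambda_i
% Scalar (intercept) invariance: \tau_i^{(1)} = \tau_i^{(2)} = \dots = \tau_i
```

The latent trait (Rasch) alternative discussed in the paper tests instead whether item locations remain invariant across groups, a requirement built into the Rasch model's claim of specific objectivity rather than imposed as a set of group-wise constraints.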
Pain Practice | 2012
Birgit Prodinger; Thomas Salzberger; Gerold Stucki; Tanja Stamm; Alarcos Cieza
Objectives: Instruments to assess functioning in patients with fibromyalgia (FM) vary considerably in their content and are often symptom-specific. This study aimed to examine whether it is feasible to construct a psychometrically sound clinical instrument to measure functioning in FM based on the Brief ICF Core Set for chronic widespread pain (CWP).
Frontiers in Psychology | 2013
Thomas Salzberger
Measures of psychological attributes abound in the social sciences as much as measures of physical properties do in the physical sciences. However, there are crucial differences in the scientific underpinning of measurement between the two domains. While measurement in the physical sciences is supported by empirical evidence that demonstrates the quantitative nature of the property assessed, measurement in the social sciences is, in large part, made possible only by a vague, discretionary definition of measurement that places hardly any restrictions on empirical data. Traditional psychometric analyses fail to address the requirements of measurement as defined more rigorously in the physical sciences. The construct definitions do not allow for testable predictions, and content validity becomes a matter of highly subjective judgment. In order to improve the measurement of psychological attributes, it is suggested, first, to readopt the definition of measurement used in the physical sciences; second, to devise an elaborate theory of the construct to be measured that includes the hypothesis of a quantitative attribute; and third, to test the data for the structure implied by the hypothesis of quantity as well as for predictions derived from the theory of the construct.
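One standard formalization of "the structure implied by the hypothesis of quantity" comes from additive conjoint measurement; it is added here as an assumed illustration rather than quoted from the paper. For persons a, b responding to items x, y, the independence (single cancellation) condition requires:

```latex
% Single cancellation (independence): the order over persons must not depend on the item
P(a, x) \ge P(b, x) \;\Longleftrightarrow\; P(a, y) \ge P(b, y)
```

Double cancellation and higher-order conditions add further testable constraints; systematic failure of such ordinal conditions in the data would count as evidence against the hypothesis that the attribute is quantitative.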
European Journal of Marketing | 2016
Thomas Salzberger; Marko Sarstedt; Adamantios Diamantopoulos
Purpose: This paper aims to comment critically on Rossiter's "How to use C-OAR-SE to design optimal standard measures" in the current issue of EJM and to provide a broader perspective on Rossiter's C-OAR-SE framework and on measurement practice in marketing in general. Design/methodology/approach: The paper is conceptual, based on an interpretation of measurement theory. Findings: The paper shows that, at best, Rossiter's mathematical dismissal of convergent validity applies to the completely hypothetical (and highly unlikely) situation in which a perfect measure without any error would be available. Further considerations cast serious doubt on the appropriateness of Rossiter's concrete-object, dual-subattribute-based single-item measures. Being immunized against any piece of empirical evidence, C-OAR-SE cannot be considered a scientific theory and is bound to perpetuate, if not aggravate, the fundamental flaws in current measurement practice. While C-OAR-SE indeed helps generate more content-valid instruments, the procedure offers no insight into whether these instruments work properly when used in research and practice. Practical implications: The paper concludes that great caution needs to be exercised before adapting measurement instruments based on the C-OAR-SE procedure, and that statistical evidence remains essential for validity assessment. Originality/value: The paper identifies several serious conceptual and operational problems in Rossiter's C-OAR-SE procedure and discusses how to align measurement in the social sciences with the definition of measurement in the physical sciences.
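A standard classical test theory identity (added for context, not drawn from the paper) makes the point about error-free measures concrete. For two measures X and Y of the same construct with reliabilities \rho_{XX'} and \rho_{YY'}:

```latex
% Correction-for-attenuation identity relating observed and true-score correlations
\rho_{XY} = \rho_{T_X T_Y}\,\sqrt{\rho_{XX'}\,\rho_{YY'}}
```

Even with perfectly correlated true scores (\rho_{T_X T_Y} = 1), the observed correlation equals \sqrt{\rho_{XX'}\rho_{YY'}} and falls below 1 as soon as either measure contains error; only the hypothetical error-free measure referred to above would yield convergent correlations of 1.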
Measurement: Interdisciplinary Research & Perspective | 2011
Thomas Salzberger
Compared to traditional test theory, where person measures are typically referenced to the distribution of a population, item response theory allows for a much more meaningful interpretation of measures, as they can be directly compared to item locations. However, Stephen Humphry shows that the crucial role of the unit of measurement has been handled carelessly in the social sciences. While, today, proper and meaningful measurement in the natural sciences is inconceivable without first establishing a universal unit, measurement in the social sciences has curiously done without a standard magnitude of whatever has allegedly been quantified. The differences between item response theory following Lord and Novick (1968) and Birnbaum (1968) and Rasch modeling based on Rasch (1980) could hardly be more fundamental in philosophical terms. While item response theory seeks to accommodate the data, and modeling item discrimination is an important tool in this respect, the Rasch model emphasizes invariance and specific objectivity (Rasch, 1977). Unequal item discrimination is incompatible with raw-score sufficiency, which is absolutely essential for maintaining specific objectivity. Specific objectivity allows us to carry out invariant comparisons even though we are confined to a particular frame of reference. It seems that Rasch modelers, practitioners and theorists alike, have taken the requirement of equal item discrimination for granted as a sine qua non for proper measurement. Stephen Humphry demonstrates that this notion is actually incorrect and needs to be reconsidered. One reason why equal item discrimination has hardly ever been challenged in Rasch modeling is arguably that the commonly used model formulation lacks an explicit account of item discrimination, which is implicitly fixed to one. In fact, this has misled even eminent experts in Rasch theory into believing that Rasch model parameters are difference scaled (Fischer, 1974) and, therefore, possess a "natural" unit. Another reason perhaps lies in the conception that dismissing equal item discrimination inevitably leads to the two-parameter logistic (2PL) model.
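To make the contrast explicit (a standard textbook juxtaposition, not quoted from the review):

```latex
% Rasch model: all items share one discrimination (absorbed into the unit)
P(X_{ni}=1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
% Two-parameter logistic (2PL) model: item-specific discrimination a_i
P(X_{ni}=1) = \frac{\exp\!\big(a_i(\theta_n - b_i)\big)}{1 + \exp\!\big(a_i(\theta_n - b_i)\big)}
```

With a common discrimination a_i = a for all items, the 2PL reduces to a Rasch model up to a rescaling of the unit, which is precisely where Humphry's argument about the unit of measurement enters; freely varying a_i, by contrast, destroys raw-score sufficiency and with it specific objectivity.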
Archive | 2017
Stefan J. Cano; Thomas Salzberger
Perceived risk is a complex and important concept. A particular challenge exists in quantifying perceived risk across different products and in relation to risk to the individual (or the user) in general. Current approaches to measuring risk perceptions are typically limited in terms of instrument content, type, and comparability. Furthermore, many existing instruments have not been constructed with recourse to the latest developments in psychometrics. This chapter briefly outlines the key concepts related to risk perception, highlights existing research, and reviews current instruments developed for tobacco products. The authors then go on to describe core elements of instrument development and the fundamentals of the three main psychometric paradigms: Classical Test Theory, Rasch Measurement Theory, and Item Response Theory. It is concluded that there is a need for new self-report instruments to measure risk perceptions of tobacco products. In addition, whereas all psychometric theories provide useful insights, the framework of Rasch Measurement Theory appears to be the most promising for delivering an instrument fit for purpose in high-stakes decision-making.
International Marketing Review | 2015
Joshua Daniel Newton; Fiona Joy Newton; Thomas Salzberger; Michael T. Ewing
Purpose: Multiple environmental behaviors will need to be adopted if climate change is to be addressed, yet current environmental decision-making models explain the adoption of single behaviors only. The purpose of this paper is to address this issue by developing and evaluating a decision-making model that explains the co-adoption, or coaction, of multiple environmental behaviors. Design/methodology/approach: To test its cross-national utility, the model was assessed separately among online survey panel respondents from three countries: Australia (n=502), the UK (n=500), and the USA (n=501). In total, three environmental behaviors were examined: sourcing electricity from a green energy provider, purchasing green products, and public transport use. For each behavioral pair, participants were grouped according to whether they had enacted coaction (performed both behaviors), some action (performed either behavior), or no action (performed neither behavior). Findings: Irrespective of national sample and b...
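A minimal sketch of the grouping rule described in the design, with hypothetical field names and records (not the authors' analysis code):

```python
# Classify respondents on each pair of self-reported behaviours into the three
# groups used in the study: coaction, some action, or no action.
from itertools import combinations

# Hypothetical records: 1 = behaviour performed, 0 = not performed.
respondents = [
    {"green_energy": 1, "green_products": 1, "public_transport": 0},
    {"green_energy": 0, "green_products": 1, "public_transport": 0},
    {"green_energy": 0, "green_products": 0, "public_transport": 0},
]

def classify(record, behaviour_a, behaviour_b):
    performed = record[behaviour_a] + record[behaviour_b]
    return {2: "coaction", 1: "some action", 0: "no action"}[performed]

for a, b in combinations(["green_energy", "green_products", "public_transport"], 2):
    groups = [classify(r, a, b) for r in respondents]
    print(f"{a} & {b}: {groups}")
```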
Archive | 2009
Thomas Salzberger; Hartmut H. Holzmüller; Anne L. Souchon
Measures are comparable if and only if measurement equivalence has been demonstrated. Although comparability and equivalence of measures are sometimes used interchangeably, we advocate a subtle but important difference in meaning. Comparability implies that measures from one group can be compared with measures from another group. It is a property of the measures that is either given or not. In particular, comparability presumes valid measures within each group compared. Measurement equivalence, by contrast, refers to the way measures are derived and estimated. It is intrinsically tied to the underlying theory of measurement. Thus, measurement equivalence cannot be dealt with in isolation; its assessment has to be incorporated into the theoretical framework of measurement. Measurement equivalence is closely connected to construct validity, for it refers to the way manifest indicators are related to the latent variable, both within a particular culture and across different cultures. From this it follows that equivalence cannot, or should not, be treated as a separate issue but as a constitutive element of validity. A discussion of measurement equivalence without addressing validity would be incomplete.