Publication


Featured research published by Catherine A. Roster.


International Journal of Market Research | 2004

A comparison of response characteristics from web and telephone surveys

Catherine A. Roster; Robert Rogers; Gerald Albaum; Darin Klein

Increasingly, web surveys are being used to supplement telephone survey data, and some predict that internet methods will one day replace telephone interviews as the primary method for surveying general populations. Despite these trends, few studies have systematically compared response differences between the two methods. This article describes a study in which both telephone and web surveys were used to collect data on the corporate reputation of an international firm. Findings reveal significant differences in sample characteristics, response effects and overall costs. In addition to demographic differences, the web survey garnered a lower response rate and more item omissions, and produced more negative or neutral evaluations than the telephone survey did. The factor structure for the corporate reputation construct was simpler in the web-based data. Predictability of behavioural measures was essentially equivalent between the two modes; however, cost-per-contact was significantly lower in the web survey.
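
The response-rate and cost metrics compared above reduce to simple ratios. As a minimal illustration, the Python sketch below computes them from entirely hypothetical counts and costs; the article's actual figures are not reproduced in this abstract.

    # Hypothetical figures for illustration only; not the study's data.
    modes = {
        "telephone": {"contacts": 1000, "completes": 320, "total_cost": 8000.0},
        "web":       {"contacts": 1000, "completes": 180, "total_cost": 1500.0},
    }

    for mode, m in modes.items():
        response_rate = m["completes"] / m["contacts"]        # completes per contact
        cost_per_contact = m["total_cost"] / m["contacts"]    # spend per attempted contact
        cost_per_complete = m["total_cost"] / m["completes"]  # spend per usable response
        print(f"{mode}: response rate {response_rate:.1%}, "
              f"cost/contact ${cost_per_contact:.2f}, "
              f"cost/complete ${cost_per_complete:.2f}")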


Journal of Business & Industrial Marketing | 2010

An exploratory study of attendee activities at a business trade show

Srinath Gopalakrishna; Catherine A. Roster; Shrihari Sridhar

Purpose – Although trade shows are a significant part of the B2B communications mix, academic research in the area is sparse. To successfully manage this medium, a careful understanding of attendee behavior on the trade show floor is necessary. Drawing from the rich literature on shopper typologies in retailing (which parallels the trade show atmosphere), this paper sets out to develop a set of attendee metrics that show organizers can track regularly. Design/methodology/approach – Through latent class clustering on unique attendee-level data from a popular computer trade show, five segments of attendee activity are uncovered that differ along dimensions such as the attendee's involvement and focus and the exhibitor's booth size, booth accessibility, and product display. Findings – Significant heterogeneity is found in attendee activities on the show floor. There are interesting similarities and differences between the retail and B2B shopper. Implications for trade show organizers and exhibitors are discussed.
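
Latent class clustering, the segmentation technique named above, is typically fitted on categorical indicators with specialized software. As a rough, illustrative analogue only, this Python sketch runs a Gaussian mixture over synthetic attendee-level features and picks the number of classes by BIC; none of the variables come from the paper's data.

    # Rough analogue of latent class clustering via a Gaussian mixture.
    # True LCA models categorical indicators; this sketch only shows the
    # "fit several class counts, keep the best by BIC" workflow.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for attendee activity metrics (hypothetical):
    # e.g. booths visited, minutes on the floor, product-display interactions.
    X = rng.normal(size=(500, 3))

    best_model, best_bic = None, float("inf")
    for n_classes in range(2, 8):
        model = GaussianMixture(n_components=n_classes, random_state=0).fit(X)
        bic = model.bic(X)  # lower BIC = better fit/complexity trade-off
        if bic < best_bic:
            best_model, best_bic = model, bic

    segments = best_model.predict(X)  # segment label per attendee
    print("classes:", best_model.n_components, "sizes:", np.bincount(segments))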


The Journal of Marketing Theory and Practice | 2007

Management of Marketing Research Projects: Does Delivery Method Matter Anymore in Survey Research?

Catherine A. Roster; Robert D. Rogers; George C. Hozier; Kenneth G. Baker; Gerald Albaum

This study compared online and offline survey modes in a single study of people's attitudes toward furniture shopping. Differences in sample characteristics were found between modes and between sample and population parameters. Overall, online and offline modes of survey delivery appear to be equally susceptible to population parameter biases, except for gender. Online modes had lower response rates and higher item omission rates than offline modes. Online modes did not emerge as the lowest cost per respondent, as had been hypothesized. Furthermore, results suggested that the quality of data obtained by online modes may be somewhat inferior to data collected by offline survey modes.


International Journal of Market Research | 2011

Visiting item non-responses in internet survey data collection

Gerald Albaum; James B. Wiley; Catherine A. Roster; Scott M. Smith

A widely used technique in internet surveys is ‘forced answering’, which requires respondents to enter an ‘appropriate’ response before they are allowed to proceed to the next survey question. Forced answering virtually eliminates respondent error due to item non-response. However, forced answering might cause respondents to opt out entirely or break off early in the survey, which would increase non-response error. One suggested remedy is to provide a ‘prefer not to answer’ (PNA) option alongside forced answering, allowing respondents to continue without providing a substantive response to each question. This study examines the effects of forced answering and the PNA option on item non-response rates in internet surveys. Findings reveal that the PNA option is not a perfect substitute for leaving questions blank, which calls into question both the equivalence of response options that let internet survey respondents bypass questions and the quality-versus-quantity trade-offs associated with internet survey design choices.
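
The item non-response comparison at the heart of this study can be made concrete with a small sketch. The response matrix below is hypothetical (question names and values are invented for illustration); it simply distinguishes items left blank from items answered with the PNA option.

    import numpy as np
    import pandas as pd

    # Rows = respondents, columns = survey items. NaN marks an item left
    # blank (possible when answering is not forced); "PNA" marks a
    # 'prefer not to answer' choice. All values are hypothetical.
    responses = pd.DataFrame({
        "q1": [5, 4, np.nan, "PNA", 3],
        "q2": [2, "PNA", np.nan, 1, 4],
        "q3": [1, 3, 2, "PNA", np.nan],
    })

    blank_rate = responses.isna().mean()   # per-item rate of blanks
    pna_rate = responses.eq("PNA").mean()  # per-item rate of PNA choices
    # Substantive item non-response combines both ways of skipping:
    print(blank_rate + pna_rate)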


Archive | 2015

Topic Sensitivity: Implications for Web-Based Surveys

Gerald Albaum; Catherine A. Roster; Scott M. Smith

Use of Web surveys by academic and practitioner researchers in marketing is increasing rapidly, largely because Web surveys offer advantages in speed, cost, and efficiency of data collection over other modes. In addition, Web survey programs offer researchers a wide variety of design options that can reduce sources of respondent error that are typically high in other self-administered methods, such as acquiescence, extreme responding, and social desirability (Miller 2006).


Asia Pacific Journal of Marketing and Logistics | 2014

Topic sensitivity and research design: effects on internet survey respondents' motives

Gerald Albaum; Catherine A. Roster; Scott M. Smith

Purpose – The purpose of this paper is to examine the effect of topic sensitivity and of the research design techniques of forced answering (FA) (i.e. respondents cannot proceed if they leave an answer blank) and response options (use of a “prefer not to answer” (PNA) option) on respondents' motives for participating in an internet-based survey. Design/methodology/approach – Data were collected in a field experiment in Hong Kong using a 2×2×2 factorial design. The variables manipulated were topic sensitivity, use of FA, and response options. The dependent variables were eight specific motives obtained from responses to the Survey Participation Inventory (SPI). Findings – Topic sensitivity has a significant influence on seven of the eight motives. The use of FA does not appear to affect motives. In contrast, the use of the PNA response option has a significant effect on all motives except “obligation”. The SPI appears to be a viable measure for use with Hong Kong online panellists, and perhaps with other Asian and non-Asian populations.
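
As a sketch of how one motive score from such a 2×2×2 between-subjects design might be analyzed, the code below fits a three-factor ANOVA with statsmodels. The factor names echo the abstract, but the data and effect sizes are simulated assumptions, not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "sensitivity": rng.choice(["high", "low"], n),
        "forced": rng.choice(["yes", "no"], n),
        "pna": rng.choice(["offered", "absent"], n),
    })
    # Simulate a motive score with a topic-sensitivity main effect only.
    df["motive"] = rng.normal(5, 1, n) - 0.6 * (df["sensitivity"] == "high")

    model = smf.ols("motive ~ sensitivity * forced * pna", data=df).fit()
    print(anova_lm(model, typ=2))  # main effects and all interactions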


The Journal of Marketing Theory and Practice | 2017

Effect of Topic Sensitivity on Online Survey Panelists’ Motivation and Data Quality

Catherine A. Roster; Gerald Albaum; Scott M. Smith

This research investigates the effect of topic sensitivity on panelists’ motivations and data quality. An Internet survey in which topic sensitivity varied (high, low) was conducted with panelists using the Survey Participation Inventory (SPI). A two-factor structure based on intrinsic versus extrinsic motivations was used to cluster respondents. A two-way factorial MANOVA between the sensitivity conditions and clusters assessed self-report data quality, completion time, extreme response style, and response dispersion. Panelists’ motivations decreased in the high sensitivity topic condition. However, extrinsic rewards appeared to fortify intrinsic motives without seriously compromising data quality for panelists asked to respond to sensitive questions.
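
A minimal sketch of the two-way factorial MANOVA described above, again on simulated data; the outcome names follow the abstract, while the values and any effects are synthetic.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(2)
    n = 300
    df = pd.DataFrame({
        "sensitivity": rng.choice(["high", "low"], n),         # topic condition
        "cluster": rng.choice(["intrinsic", "extrinsic"], n),  # motivation cluster
        "completion_time": rng.normal(600, 120, n),
        "extreme_style": rng.normal(0.2, 0.05, n),
        "dispersion": rng.normal(1.0, 0.3, n),
    })

    mv = MANOVA.from_formula(
        "completion_time + extreme_style + dispersion ~ sensitivity * cluster",
        data=df,
    )
    print(mv.mv_test())  # Wilks' lambda, Pillai's trace, etc., per effect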


Behavior Research Methods | 2018

Development and application of a self-report measure for assessing sensitive information disclosures across multiple modes

Matthew D. Pickard; D. Wilson; Catherine A. Roster

Building on the literature that approaches self-disclosure as a decision-making process, we proposed a self-reported Sensitive Information Disclosure (SID) measure and tested the measure’s reliability and validity in two studies across a variety of interview modes and settings. We used theory to identify potential dimensions of sensitive information disclosures, created potential scale items, performed two separate card sorts, and validated the resulting pool of items in two separate experiments. Participants answered the SID scale items following an interview involving sensitive information, potential risk, and after-disclosure vulnerability. Study 1 was a laboratory experiment conducted with 165 university students. Exploratory factor analysis results revealed a two-factor structure, Personal Discomfort and Revealing Personal Information. Study 2 replicated these procedures using confirmatory factor analysis to confirm the factor structure and demonstrate the scale’s reliability and validity, with a sample of 77 students and 275 participants from Amazon’s M-Turk. Together, these results demonstrate that the proposed 11-item SID scale has good convergent and discriminant validity as well as good reliability. A quasi-experimental application of the measure is illustrated using the substantive findings from Study 2. This research fills a gap in the literature by developing a topic-free scale to measure SID as a dependent variable. The ability to accurately measure sensitive information disclosure is an important and necessary step toward developing a more thorough understanding of how people feel and react when asked to provide personal information in diverse interview settings.
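
The exploratory-factor-analysis step can be sketched as follows; the confirmatory step in Study 2 would normally use dedicated SEM tooling, which this illustration does not cover. The item responses are simulated to echo the reported two-factor structure, not drawn from the SID data.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(3)
    n_respondents, n_items = 165, 11

    # Simulate 11 Likert-style items driven by two latent factors,
    # mirroring "Personal Discomfort" and "Revealing Personal Information".
    latent = rng.normal(size=(n_respondents, 2))
    loadings = rng.uniform(0.4, 0.9, size=(2, n_items))
    items = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

    efa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
    print(np.round(efa.components_.T, 2))  # item-by-factor loading matrix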


Archive | 2016

This Is Sensitive, Let Me Talk to an Avatar: A Structured Abstract

Catherine A. Roster; Matthew D. Pickard; Yixing Chen

Opportunities exist for marketers and other business researchers to connect one-on-one with consumers using avatar interviewers, which may facilitate disclosure of sensitive and personal information without the costs of personal interviewers. This research reports findings from an exploratory study designed to reveal factors that facilitate greater self-disclosure on sensitive interview topics with human versus avatar interviewers. Results reveal conditions in which computer-generated avatars can increase self-disclosure on sensitive topics in personal interview situations, as an alternative to costly data collection conducted by human interviewers.


Archive | 2015

Internet-Based Surveys: Methodological Issues

Gerald Albaum; Patrick L. Brockett; Linda L. Golden; Scott M. Smith; James B. Wiley; Vallen Han; Catherine A. Roster

Web-based, or internet, surveys are widely used in marketing research, and their use continues to grow. The reasons for this are partly because they provide a number of technological features that are designed to reduce common sources of respondent error that can impact data quality, and partly because compared to traditional self-administered methods they offer advantages in speed, cost, and efficiency of data collection. This session deals with selected methodological issues concerning Web surveys.

Collaboration


Dive into Catherine A. Roster's collaborations.

Top Co-Authors

Gerald Albaum
University of New Mexico

Scott M. Smith
Brigham Young University

Robert Rogers
University of New Mexico

Linda L. Golden
University of Texas at Austin

Lorenzo Lucianetti
University of Chieti-Pescara

D. Wilson
University of Oklahoma