

Publications


Featured research published by Michael F. Schober.


Public Opinion Quarterly | 1997

Does Conversational Interviewing Reduce Survey Measurement Error?

Michael F. Schober; Frederick G. Conrad

Standardized survey interviewing is widely advocated in order to reduce interviewer-related error, for example by F. J. Fowler and T. W. Mangione. But L. Suchman and B. Jordan argue that standardized wording may decrease response accuracy because it prevents the conversational flexibility that respondents need in order to understand questions as survey designers intended. The authors propose that the arguments for these competing positions (standardized versus flexible interviewing) may each be correct under different circumstances. In particular, both standardized and flexible interviewing should produce high levels of accuracy when respondents have no doubts about how the concepts in a question map onto their circumstances, but flexible interviewing should produce higher response accuracy when respondents are unsure about these mappings. The authors demonstrate this in a laboratory experiment in which professional telephone interviewers, using either standardized or flexible interviewing techniques, asked respondents questions from three large government surveys. Respondents answered on the basis of fictional descriptions so that the authors could measure response accuracy. The two interviewing techniques led to virtually perfect accuracy when the concepts in the questions clearly mapped onto the fictional situations. When the mapping was less clear, flexible interviewing increased accuracy by almost 60 percent, whether respondents had requested help from interviewers or interviewers had intervened without being asked. But the improvement in accuracy came at a substantial cost: a large increase in interview duration. The authors propose that different circumstances may justify the use of either interviewing technique.


Public Opinion Quarterly | 2000

Clarifying Question Meaning in a Household Telephone Survey

Frederick G. Conrad; Michael F. Schober

This study contrasts two interviewing techniques that reflect different tacit assumptions about communication. In one, strictly standardized interviewing, interviewers leave the interpretation of questions up to respondents. In the other, conversational interviewing, interviewers say whatever it takes to make sure that questions are interpreted uniformly and as intended. Respondents from a national sample were interviewed twice. Each time they were asked the same factual questions from ongoing government surveys, five about housing and five about recent purchases. The first interview was strictly standardized; the second was standardized for half the respondents and conversational for the others. Respondents in a second conversational interview answered differently than in the first interview more often, and for reasons that conformed more closely to official definitions, than respondents in a second standardized interview. This suggests that conversational interviewing improved comprehension, although it also lengthened interviews. We conclude that respondents in a national sample may misinterpret certain questions frequently enough to compromise data quality and that such misunderstandings cannot easily be eliminated by pretesting and rewording questions alone. More standardized comprehension may require less standardized interviewer behavior.


Discourse Processes | 1995

Speakers, addressees, and frames of reference: Whose effort is minimized in conversations about locations?

Michael F. Schober

When speakers describe locations, they must choose among taking their own perspective, taking their addressee's perspective, using a shared frame of reference, or using a neutral frame of reference that avoids the issue, among other options. This study examines whether speakers choose spatial perspectives that minimize effort for themselves, for their partners, or for both. It also examines whether perspectives are taken for particular individuals (the speaker or the addressee) or for whichever person knows the information to be communicated. Three possible models are proposed for exactly how descriptions in a particular perspective become more difficult when speaker and addressee view a scene from different offsets. In a communication task, speakers described locations on a complex display for addressees who shared their vantage point or were offset by 90° or 180°. In these conversations, both partners either took the perspective of the person who did not know the location or used descriptions that helped them avoid choosing one or the other.


Discourse Processes | 1999

How beliefs about a partner's goals affect referring in goal‐discrepant conversations

Alex W. Russell; Michael F. Schober

This study examines how interlocutors' beliefs about each other's goals (partner-goal beliefs) affect conversational references. Pairs of participants whose mismatched conversational goals required getting information at a more or less specific level discussed abstract shapes. Pairs were either informed of the goal difference, misinformed that their goals were the same, or noninformed about the goal difference. Partner-goal beliefs affected how participants collaborated on references: speakers tailored their descriptions to fit their beliefs about addressees' goals, and addressees' verbal feedback was affected by speakers' descriptions. Misinformed and noninformed pairs never differed reliably in their language use, but speakers in these pairs described shapes, and their addressees responded to those descriptions, differently than informed pairs. Afterward, informed participants recognized the shapes more or less accurately depending on their individual goal, whereas in the misinformed and noninformed pairs, pa...


PLOS ONE | 2015

Precision and Disclosure in Text and Voice Interviews on Smartphones

Michael F. Schober; Frederick G. Conrad; Christopher Antoun; Patrick Ehlen; Stefanie Fail; Andrew L. Hupp; Michael V. Johnston; Lucas Vickers; H. Yanna Yan; Chan Zhang

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher-quality data—fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information—than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
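Two of the data-quality indicators named in this abstract lend themselves to simple computation. The sketch below is illustrative only (not the authors' code, and the function names, rounding base, and toy data are assumptions): it flags rounded numerical answers as multiples of a chosen base and measures differentiation as the share of distinct values used across a battery with a common scale.

```python
# Hypothetical sketch of two data-quality metrics from the abstract.
# Function names, the rounding base, and the toy data are assumptions.

def share_rounded(answers, base=5):
    """Fraction of numerical answers that look rounded (multiples of `base`)."""
    return sum(1 for a in answers if a % base == 0) / len(answers)

def differentiation(ratings):
    """Share of distinct values used across a battery with a common scale;
    higher means more differentiated (less 'straight-lining')."""
    return len(set(ratings)) / len(ratings)

voice = [20, 25, 30, 50, 100]   # mostly round numbers, as in voice interviews
text = [22, 25, 31, 47, 98]     # more precise answers, as in text interviews

print(share_rounded(voice))              # 1.0
print(share_rounded(text))               # 0.2
print(differentiation([4, 4, 4, 4, 5]))  # 0.4
```

Lower `share_rounded` and higher `differentiation` correspond to the "higher-quality data" pattern the study reports for text interviews.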


Field Methods | 2016

Comparisons of Online Recruitment Strategies for Convenience Samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk

Christopher Antoun; Chan Zhang; Frederick G. Conrad; Michael F. Schober

The rise of social media websites (e.g., Facebook) and online services such as Google AdWords and Amazon Mechanical Turk (MTurk) offers new opportunities for researchers to recruit study participants. Although researchers have started to use these emerging methods, little is known about how they perform in terms of cost efficiency and, more importantly, the types of people that they ultimately recruit. Here, we report findings about the performance of four online sources for recruiting iPhone users to participate in a web survey. The findings reveal very different performances between two types of strategies: those that “pull in” online users actively looking for paid work (MTurk workers and Craigslist users) and those that “push out” a recruiting ad to online users engaged in other, unrelated online activities (Google AdWords and Facebook). The pull-method recruits were more cost efficient and committed to the survey task, while the push-method recruits were more demographically diverse.


Discourse Processes | 2007

Modeling Speech Disfluency to Predict Conceptual Misalignment in Speech Survey Interfaces

Patrick Ehlen; Michael F. Schober; Frederick G. Conrad

Computer-based interviewing systems could use models of respondent disfluency behaviors to predict a need for clarification of terms in survey questions. This study compares simulated speech interfaces that use two such models (a generic model and a stereotyped model that distinguishes between the speech of younger and older speakers) to several non-modeling speech interfaces in a task where respondents provided answers to survey questions from fictional scenarios. The modeling procedure found that the best predictor of conceptual misalignment was a critical "Goldilocks" range for response latency—that is, a response time that is neither too slow nor too fast—outside of which responses are more likely to be conceptually misaligned. Different Goldilocks ranges are effective for younger and older speakers.
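The Goldilocks-range idea reduces to a simple decision rule: flag a response for clarification when its latency falls outside an acceptable window for the speaker's age group. The sketch below is only illustrative of that rule; the threshold values and names are made up, since the actual ranges are fit from speech data in the study.

```python
# Illustrative sketch of a Goldilocks latency rule. The millisecond
# thresholds and the dictionary below are hypothetical, not the paper's.

GOLDILOCKS_MS = {"younger": (600, 2500), "older": (900, 3500)}

def needs_clarification(latency_ms, age_group="younger"):
    """Flag a response as possibly conceptually misaligned when its
    latency falls outside the age group's acceptable range."""
    lo, hi = GOLDILOCKS_MS[age_group]
    return not (lo <= latency_ms <= hi)

print(needs_clarification(1200))          # False: inside the range
print(needs_clarification(4000))          # True: too slow
print(needs_clarification(300, "older"))  # True: too fast
```

Using separate ranges per age group mirrors the paper's stereotyped model, which outperformed a single generic range.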


Poetics | 2001

Readers’ varying interpretations of theme in short fiction

Victoria Kurtz; Michael F. Schober

After reviewing arguments about the nature of thematic inferences and problems with previous empirical research, we report the results of a study examining both the process by which individual readers arrive at a fictional story's theme and the themes at which they arrive. Sixteen avid readers read two microfiction stories paragraph by paragraph, commenting after each paragraph on the larger point the author might be making. At the end of each story, the participants stated a theme capturing the overall meaning of the story. The results showed that readers (1) differed substantially in their interpretations of the stories' themes, (2) can draw the same conclusion about a story and yet make very different thematic inferences while reading, and (3) appear to keep alive a number of interpretations of a story's meaning, settling on the overall theme only at the story's end. The results strongly suggest that themes do not reside in texts in any obvious way but are constructed by readers. They also suggest that thematic inferences are not computed automatically as part of comprehension, but later, as acts of interpretation.


Virtual Reality | 2006

Virtual environments for creative work in collaborative music-making

Michael F. Schober

Virtual environments are beginning to allow musicians to perform collaboratively in real time at a distance, coordinating on timing and conceptualization. The development of virtual spaces for collaboration necessitates more clearly specified theorizing about the nature of physical copresence in music-making: how the available communicative cues are likely to affect the nature of visually mediated rehearsal and performance. Pilot data for a project carried out at the New School for Social Research demonstrate some important factors relevant to designing remote spaces for musical collaboration, and suggest that virtual environments for musical collaboration could actually enhance the feeling of being together that creative musical expression requires.


Policy Insights from the Behavioral and Brain Sciences | 2015

Improving Social Measurement by Understanding Interaction in Survey Interviews

Michael F. Schober; Frederick G. Conrad

Many of the official statistics and leading indicators that inform policy decisions are created from aggregating data collected in scientific survey interviews. What happens in the back-and-forth of those interviews—whether a sampled member of the public agrees to participate or not, whether a respondent comprehends questions in the way they were intended or not, whether the interview is spoken or texted—can thus have far-reaching consequences. But the landscape for social measurement is rapidly changing: Participation rates are declining, and people’s daily communication patterns are evolving with new technologies (text messaging, video chatting, social media posting, etc.). New analyses of survey interactions are demonstrating aspects of interviewer speech that can substantially affect survey participation, which is vital if social measurement is to be trustworthy. Findings also suggest that, once a survey interview starts, the risks of misunderstanding and miscommunication are greater than one might expect, potentially jeopardizing the accuracy of survey results; different approaches to interviewing that allow clarification dialogue can improve respondents’ comprehension and thus survey data quality. Analyses of text messaging and voice interviews on smartphones demonstrate the importance of adapting scientific social measurement to new patterns of communication, adding ways for people to contribute their data at a time and in a mode that is convenient for them even when they are mobile or multitasking.

Collaboration


Michael F. Schober's top co-authors include:

Christopher Antoun (United States Census Bureau)
Neta Spiro (University of Cambridge)