Peter V. Miller
Northwestern University
Publications
Featured research published by Peter V. Miller.
Sociological Methodology | 1981
Charles F. Cannell; Peter V. Miller; Lois Oksenberg
The survey interviewer, as gatekeeper to the attitudes, experiences, and perceptions of respondents, occupies a prominent position in survey research and has been a subject of concern since the emergence of systematic survey research. Recognition of the interviewer's potential for manipulating or distorting responses has generated numerous approaches aimed at controlling the interviewer's influence on responses. Over the years, rules for interviewer behavior have evolved
Public Opinion Quarterly | 1987
David Protess; Fay Lomax Cook; Thomas R. Curtin; Margaret T. Gordon; D.R. Leff; Maxwell McCombs; Peter V. Miller
This article reports the fourth in a continuing series of case studies that explore the impact of news media investigative journalism on the general public, policymakers, and public policy. The media disclosures in this field experiment had limited effects on the general public but were influential in changing the attitudes of policymakers. The study describes how changes in public policymaking resulted from collaboration between journalists and government officials. The authors develop a model that is a beginning step toward specifying the conditions under which media investigations influence public attitudes and agendas. This article reports the fourth in a series of field experiments that test the agenda-setting hypothesis (McCombs and Shaw, 1972) for news
Public Opinion Quarterly | 1984
Peter V. Miller
The marked growth in the use of the telephone for survey interviewing has spurred a number of queries about the effects of that medium on data quality. These concerns generally have involved the comparability of results obtained in telephone and face-to-face interviews; a related issue is whether survey techniques developed for face-to-face interviews can be utilized in telephone contacts. For example, can items which employ numerical scales be presented to telephone respondents without the visual aids that commonly accompany them in in-person interviews? As a contribution to this debate, this paper presents the findings of a national study which experimentally compared two methods of presenting seven-point scale attitude questions to telephone respondents. One approach simply presented the response categories in the questions
Public Opinion Quarterly | 1985
Peter V. Miller; Robert M. Groves
Record check studies, involving the comparison of survey responses with external record evidence, are a familiar tool in survey methodology. The findings of a recently conducted reverse record check study are reported here. The analyses examine match rates between survey reports and police records, employing more or less restrictive match criteria (e.g., various computer algorithms versus human judgments). The analyses reveal marked differences in the level of survey-record correspondence. Since the level of match rate appears highly variable depending on the definition of a match, we advocate reexamination of the lessons of previous record check studies which employed only vaguely specified match criteria. We argue, further, that record evidence may best be employed in constructing alternative indicators of phenomena to be measured, rather than as the arbiter of survey response quality. Peter V. Miller is Associate Professor of Communication Studies and Research Faculty, Center for Urban Affairs and Policy Research, Northwestern University. Robert M. Groves is Associate Research Scientist, Survey Research Center, Institute for Social Research, University of Michigan. Research for this article was partially supported by a contract from the Bureau of Justice Statistics, U.S. Department of Justice. The article does not necessarily represent the views of the Department of Justice. The authors are indebted to Allen H. Andrews, Superintendent of Police, Peoria, Illinois, for making the study possible and to Dr. Charles Cannell, Dr. Charles Cowan, and Dr. Wesley Skogan for insightful comments on an earlier draft. The authors are responsible for any errors which remain.
Public Opinion Quarterly | 1982
Peter V. Miller; Charles F. Cannell
Journal of Drug Issues | 2000
Michael Fendrich; Peter V. Miller
Telephone surveys are not a new phenomenon, but academic and governmental organizations have only recently made serious investments in the method. As of this writing, for example, the Bureau of the Census has conducted only one study involving cold telephone interviews: a random-digit dialing sample of Michigan residents. Reducing the cost of surveys is the primary motivation for developments in telephone interviewing. Many researchers still see the telephone interview as the somewhat disreputable poor relation of the personal contact. If it were not for financial exigencies, these people would probably not view the telephone as a preferable (or even a viable) means of collecting survey data. This attitude may be one reason why we know little about what constitutes effective telephone interviewing techniques. The telephone survey has been treated as a necessity, rather than as an
Public Opinion Quarterly | 1995
Peter V. Miller
Over the last three decades, policy makers have taken an increasing interest in drug use statistics for the general population and for special subgroups: adolescents, pregnant women, and those occupying our prisons and jails. The growth of interest in drug use data has been accompanied by a growth in research investigating a range of issues associated with drug use measurement validity. While some of this research finds its way into interdisciplinary journals such as the Journal of Drug Issues (JDI), much of it languishes in narrower venues that many drug use researchers rarely see. This special issue was undertaken on the premise that those of us doing research on drug use could benefit from a volume devoted to a multidimensional, multidisciplinary approach to measurement issues. The goal of this special issue is to provide researchers with exposure to a growing body of research evaluating the quality of drug use measures, comparing alternative methods of survey implementation, and employing multiple alternative assessments of drug use (including biological and physical measures). We began with an open solicitation for research papers addressing a full array of issues associated with drug use measurement that was distilled into six main themes: 1) the validity of drug use reporting in surveys; 2) the impact of mode of data collection on drug use reporting and prevalence estimation; 3) the accuracy of drug use prevalence estimates derived from population surveys; 4) innovative measurement, data collection, and sampling strategies; 5) alternative approaches to measuring drug use in special populations; and 6) the utility, feasibility, and limitations of drug testing in epidemiological research. Interest in this solicitation was considerable, and we received a variety of submissions covering the gamut
Public Opinion Quarterly | 1996
Peter V. Miller
The National Health and Social Life Survey (NHSLS), perhaps better known as the Sex Survey, is remarkable in many ways. A groundbreaking study of sexual behavior, a forceful demonstration of the value of free scientific inquiry, a tribute to individual and organizational courage and tenacity, the NHSLS sets a kind of standard for the conduct of survey research. But the standard set by the NHSLS owes more to the study's subject matter, the societal knowledge vacuum in which it was conducted, and the Kafkaesque political adversities it overcame than to its methodological innovativeness. In many respects, the NHSLS is a consciously old-fashioned survey. To the extent that it contributes to our understanding of sexual behavior, the study reinforces belief in tried-and-true survey practices applied in a novel setting. To the extent that there is doubt about its findings, the NHSLS highlights perennial gaps in our knowledge of how surveys ought to be done. This review examines the NHSLS in a number of contexts (historical, political, and methodological) and offers comparisons between it and other sex surveys. In keeping with the purpose of the Poll Review section, its focus is on lessons of the study for the practice of survey research, rather than on its contribution to sexology (although the two are not always possible to separate).
Sociological Methodology | 2016
Peter V. Miller
Catania, Canchola, and Pollack's response to my review of the National Health and Social Life Survey (NHSLS) is largely a response to the NHSLS itself. It appears that they have read my review as an apology for the NHSLS and a criticism of the National AIDS and Behavioral Surveys (NABS), of which Catania was principal investigator. On the chance that others shared this inference, their reply offers a welcome opportunity to clear up the misconception and to discuss further some of the issues raised in my review. To set the record straight, the purpose in comparing the NHSLS and the NABS was not to uphold one and reject the other, but to put the NHSLS into an explanatory context. I juxtaposed the designs and findings of these studies (along with NATSAL [the National Survey of Sexual Attitudes and Lifestyles], the British national sex survey) to elucidate some methodological, political, and social issues involved in surveys that focus on sexual behavior. I made some judgments, positive and negative, about methods employed in the NHSLS and arguments made in The Social Organization of Sexuality (Laumann et al. 1994). I used the NABS as a point of comparison but did not examine it thoroughly nor offer any judgment about its methodological quality.
Public Opinion Quarterly | 2008
Mick P. Couper; Peter V. Miller
I: Did you have some favorite subjects [in school]?
R: Yes, typing.
I: Good, and how is your speed in typing?
R: Well, when I finished school I had between 57 and 60 words per minute.
I: I see.
R: Of course, I haven't been doing too much typing. With my job it wasn't required, and so I'm a little rusty right now.
I: Sure. That's something you have to keep up on and do every day.
R: Yes, once you get your speed up there, I find, if you stop, it goes down to about 50. I went all summer, and when I started school it was down to about 35, but then it came right up again.