Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where James Wagner is active.

Publication


Featured research published by James Wagner.


Public Opinion Quarterly | 2012

A Comparison of Alternative Indicators for the Risk of Nonresponse Bias

James Wagner

The response rate has played a key role in measuring the risk of nonresponse bias. However, recent empirical evidence has called into question the utility of the response rate for predicting nonresponse bias. The search for alternatives to the response rate has begun. The present article offers a typology for these indicators, briefly describes the strengths and weaknesses of each type, and suggests directions for future research. New standards for reporting on the risk of nonresponse bias may be needed. Certainly, any analysis into the risk of nonresponse bias will need to be multifaceted and include sensitivity analyses designed to test the impact of key assumptions about the data that are missing due to nonresponse.
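
To make the contrast between indicator types concrete, the sketch below compares the response rate with one widely cited propensity-based alternative in the spirit of the R-indicator (R = 1 - 2 times the standard deviation of estimated response propensities). The frame covariate, coefficients, and simulated sample are illustrative assumptions, not data from the article.

```python
# Illustrative sketch (not from the article): contrasting the response rate with a
# propensity-based alternative in the spirit of the R-indicator,
# R = 1 - 2 * SD(estimated response propensities).
# The frame covariate, coefficients, and sample are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
frame_covariate = rng.normal(size=(n, 1))      # auxiliary variable known for all sampled cases
true_propensity = 1 / (1 + np.exp(-(-0.5 + 0.8 * frame_covariate[:, 0])))
responded = rng.binomial(1, true_propensity)   # 1 = responded, 0 = nonresponse

# The familiar indicator: the response rate.
response_rate = responded.mean()

# An alternative: estimate response propensities from frame data and summarize
# their spread; values of R near 1 suggest more "representative" response.
propensity_hat = LogisticRegression().fit(frame_covariate, responded).predict_proba(frame_covariate)[:, 1]
r_indicator = 1 - 2 * propensity_hat.std(ddof=1)

print(f"response rate = {response_rate:.3f}, R-indicator = {r_indicator:.3f}")
```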


Statistics in Medicine | 2010

A new stopping rule for surveys

James Wagner; Trivellore E. Raghunathan

Non-response is a problem for most surveys. In the sample design, non-response is often dealt with by setting a target response rate and inflating the sample size so that the desired number of interviews is reached. The decision to stop data collection is based largely on meeting the target response rate. A recent article by Rao, Glickman, and Glynn (RGG) suggests rules for stopping that are based on the survey data collected for the current set of respondents. Two of their rules compare estimates from fully imputed data where the imputations are based on a subset of early responders to fully imputed data where the imputations are based on the combined set of early and late responders. If these two estimates are different, then late responders are changing the estimate of interest. The present article develops a new rule for when to stop collecting data in a sample survey. The rule attempts to use complete interview data as well as covariates available on non-responders to determine when the probability that collecting additional data will change the survey estimate is sufficiently low to justify stopping data collection. The rule is compared with that of RGG using simulations and then is implemented using data from a real survey.
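
A minimal sketch of the early-versus-late comparison underlying RGG-style rules is given below: impute the missing cases using early responders only, then using all responders collected so far, and check whether the fully imputed estimate moves. The linear imputation model and the simulated data are assumptions for illustration, not the authors' implementation.

```python
# Hedged illustration of the idea behind RGG-style comparisons: impute the missing
# cases from early responders only, then from early + late responders, and see
# whether the fully imputed estimate shifts. Simulated data; not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
x = rng.normal(size=n)                       # covariate observed for every sampled case
y = 2.0 + 1.5 * x + rng.normal(size=n)       # survey variable, seen only for respondents
early = rng.random(n) < 0.30                 # early responders
late = (~early) & (rng.random(n) < 0.25)     # later responders collected so far
respondent = early | late

def fully_imputed_mean(observed: np.ndarray) -> float:
    """Fit y ~ x on the observed cases, impute everyone else, return the full-sample mean."""
    slope, intercept = np.polyfit(x[observed], y[observed], 1)
    return np.where(observed, y, intercept + slope * x).mean()

estimate_early = fully_imputed_mean(early)       # imputations based on early responders
estimate_all = fully_imputed_mean(respondent)    # imputations based on early + late responders
print(f"early only: {estimate_early:.3f}  early+late: {estimate_all:.3f}  "
      f"shift: {estimate_all - estimate_early:+.3f}")
```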


Mechanisms of Ageing and Development | 1990

Isolation and identification of aging-related cDNAs in the mouse

Varda Friedman; James Wagner; David B. Danner

To identify genes whose expression changes as a function of aging, we screened mouse cDNA libraries with cDNAs from mice of different ages. Specifically, whole-mouse cDNA libraries were constructed in lambda gt10 using poly(A) RNA from young (3 month) and old (27 month) C57BL/6J inbred mice and these lambda plaques were hybridized with radioactive cDNAs made from pooled poly(A) RNA from animals 3 or 33 months of age. Five clones were isolated that showed an aging-related pattern of expression and four of these were identified by computerized sequence matching to the GenBank database: MUP2 (a major urinary protein); Q10 of the MHC locus; a cytoskeletal actin gene; and creatine kinase. One gene whose expression increases with aging and is most abundant in spleen remains unidentified. All five cDNAs showed 4-fold to 17-fold changes with aging in their steady-state mRNA levels in at least one tissue.


Field Methods | 2014

Does Sequence Matter in Multi-Mode Surveys: Results from an Experiment.

James Wagner; Jennifer Arrieta; Heidi Guyer; Mary Beth Ofstedal

Interest in a multimode approach to surveys has grown substantially in recent years, in part due to increased costs of face-to-face (FtF) interviewing and the emergence of the Internet as a survey mode. Yet, there is little systematic evidence of the impact of a multimode approach on survey costs and errors. This article reports the results of an experiment designed to evaluate whether a mixed-mode approach to a large screening survey would produce comparable response rates at a lower cost than an FtF screening effort. The experiment was carried out in the Health and Retirement Study (HRS), an ongoing panel study of Americans over age 50. In 2010, HRS conducted a household screening survey to recruit new sample members to supplement the existing sample. The experiment varied the sequence of modes with which the screening interview was delivered. One treatment offered mail first, followed by FtF interviewing; the other started with FtF and then mail. A control group was offered only FtF interviewing. Results suggest that the mixed-mode options reduced costs without reducing response rates to the screening interview. There is some evidence, however, that the sequence of modes offered may impact the response rate for a follow-up in-depth interview.
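
As a rough illustration of the cost-versus-response-rate comparison reported here, the snippet below tallies response rates and cost per completed screener for three hypothetical arms; the counts and costs are invented and do not come from the HRS experiment.

```python
# Made-up arm-level tallies; none of these numbers come from the HRS experiment.
arms = {
    # arm name:      (completed screeners, sampled households, total field cost in $)
    "mail then FtF": (1800, 2500, 90_000),
    "FtF then mail": (1820, 2500, 120_000),
    "FtF only":      (1810, 2500, 150_000),
}

for arm, (completes, sampled, cost) in arms.items():
    response_rate = completes / sampled
    cost_per_complete = cost / completes
    print(f"{arm:14s} response rate = {response_rate:.1%}, cost per complete = ${cost_per_complete:,.2f}")
```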


Methods, Data, Analyses: A Journal for Quantitative Methods and Survey Methodology (mda) | 2015

Maximizing Data Quality using Mode Switching in Mixed-Device Survey Design: Nonresponse Bias and Models of Demographic Behavior.

William G. Axinn; Heather H. Gatny; James Wagner

Conducting survey interviews on the internet has become an attractive method for lowering data collection costs and increasing the frequency of interviewing, especially in longitudinal studies. However, the advantages of the web mode for studies with frequent re-interviews can be offset by the serious disadvantage of low response rates and the potential for nonresponse bias to mislead investigators. Important life events, such as changes in employment status, relationship changes, or moving, can cause attrition from longitudinal studies, producing the possibility of attrition bias. The potential extent of such bias in longitudinal web surveys is not well understood. We use data from the Relationship Dynamics and Social Life (RDSL) study to examine the potential for a mixed-device approach with active mode switching to reduce attrition bias. The RDSL design allows panel members to switch modes by integrating telephone interviewing into a longitudinal web survey with the objective of collecting weekly reports. We found that allowing panel members to switch modes kept more participants in the study than a web-only approach. The characteristics of persons who ever switched modes differ from those of persons who did not, including not only demographic characteristics but also baseline characteristics related to pregnancy and time-varying characteristics collected after the baseline interview. This was true in multivariate models that control for several of these dimensions simultaneously. We conclude that mode options and mode switching are important for the success of longitudinal web surveys, helping to maximize participation and minimize attrition.


Social Science Computer Review | 2017

Timing the Mode Switch in a Sequential Mixed-Mode Survey: An Experimental Evaluation of the Impact on Final Response Rates, Key Estimates, and Costs

James Wagner; Heather M. Schroeder; Andrew D. Piskorowski; Robert J. Ursano; Murray B. Stein; Steven G. Heeringa; Lisa J. Colpe

Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates but longer delays before switching do lead to lower costs.


Journal of Official Statistics | 2017

Total Survey Error and Respondent Driven Sampling: Focus on Nonresponse and Measurement Errors in the Recruitment Process and the Network Size Reports and Implications for Inferences

Sunghee Lee; Tuba Suzer-Gurtekin; James Wagner; Richard Valliant

This study attempted to integrate key assumptions in Respondent-Driven Sampling (RDS) into the Total Survey Error (TSE) perspectives and examine TSE as a new framework for a systematic assessment of RDS errors. Using two publicly available data sets on HIV-at-risk persons, nonresponse error in the RDS recruitment process and measurement error in network size reports were examined. On nonresponse, the ascertained partial nonresponse rate was high, and a substantial proportion of recruitment chains died early. Moreover, nonresponse occurred systematically: recruiters with lower income and higher health risks generated more recruits; and peers of closer relationships were more likely to accept recruitment coupons. This suggests a lack of randomness in the recruitment process, also shown through sizable intra-chain correlation. Self-reported network sizes suggested measurement error, given their wide dispersion and unreasonable reports. This measurement error has further implications for the current RDS estimators, which use network sizes as an adjustment factor on the assumption of a positive relationship between network sizes and selection probabilities in recruitment. The adjustment resulted in nontrivial unequal weighting effects and changed estimates in directions that were difficult to explain and, at times, illogical. Moreover, recruiters’ network size played no role in actual recruitment. TSE may serve as a tool for evaluating errors in RDS, which further informs study design decisions and inference approaches.
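
The way self-reported network sizes enter the adjustment can be sketched roughly as follows, assuming an RDS-II (Volz-Heckathorn) style estimator with weights proportional to the inverse of the reported degree, together with Kish's unequal weighting effect; the degrees and outcome are toy data, not the HIV-at-risk data sets analyzed in the study.

```python
# Simplified sketch of how self-reported network sizes enter a standard RDS
# adjustment: weights proportional to 1/degree (an RDS-II / Volz-Heckathorn style
# estimator), plus Kish's unequal weighting effect, 1 + CV^2 of the weights.
# Toy data only; not the data sets analyzed in the study.
import numpy as np

rng = np.random.default_rng(2)
degree = rng.integers(1, 200, size=500).astype(float)   # self-reported network sizes
outcome = rng.binomial(1, 0.3, size=500)                # an example binary risk indicator

weights = 1.0 / degree                                  # selection probability assumed proportional to degree
weighted_estimate = np.average(outcome, weights=weights)
unweighted_estimate = outcome.mean()

# Unequal weighting effect: variance inflation attributable to the weights alone.
uwe = 1 + weights.var() / weights.mean() ** 2

print(f"unweighted = {unweighted_estimate:.3f}  weighted = {weighted_estimate:.3f}  UWE = {uwe:.2f}")
```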


Social Science Research | 2018

New options for national population surveys: The implications of internet and smartphone coverage

Mick P. Couper; Garret Gremel; William G. Axinn; Heidi Guyer; James Wagner; Brady T. West

Challenges to survey data collection have increased the costs of social research via face-to-face surveys so much that it may become extremely difficult for social scientists to continue using these methods. A key drawback to less expensive Internet-based alternatives is the threat of biased results from coverage errors in survey data. The rise of Internet-enabled smartphones presents an opportunity to re-examine the issue of Internet coverage for surveys and its implications for coverage bias. Two questions (on Internet access and smartphone ownership) were added to the National Survey of Family Growth (NSFG), a U.S. national probability survey of women and men age 15-44, using a continuous sample design. We examine 16 quarters (4 years) of data, from September 2012 to August 2016. Overall, we estimate that 82.9% of the target NSFG population has Internet access, and 81.6% has a smartphone. Combined, this means that about 90.7% of U.S. residents age 15-44 have Internet access, via either traditional devices or a smartphone. We find some evidence of compensatory coverage when looking at key race/ethnicity and age subgroups. For instance, while Black teens (15-18) have the lowest estimated rate of Internet access (81.9%) and the lowest rate of smartphone usage (72.6%), an estimated 88.0% of this subgroup has some form of Internet access. We also examine the socio-demographic correlates of Internet and smartphone coverage, separately and combined, as indicators of technology access in this population. In addition, we look at the effect of differential coverage on key estimates produced by the NSFG, related to fertility, family formation, and sexual activity. While this does not address nonresponse or measurement biases that may differ for alternative modes, our paper has implications for possible coverage biases that may arise when switching to a Web-based mode of data collection, either for follow-up surveys or to replace the main face-to-face data collection.
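
A minimal sketch of the two quantities at issue, coverage by "internet or smartphone" and the coverage bias of an estimate computed from covered cases only, is shown below; the weights, prevalence values, and outcome are invented rather than NSFG estimates.

```python
# Minimal sketch, with invented weights and prevalence values (not NSFG data), of
# (a) coverage by "internet or smartphone" and (b) the usual coverage-bias
# decomposition: bias = (1 - coverage) * (mean among covered - mean among noncovered).
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
weight = rng.uniform(0.5, 2.0, size=n)                # hypothetical survey weights
has_internet = rng.binomial(1, 0.83, size=n)
has_smartphone = rng.binomial(1, 0.82, size=n)
covered = (has_internet | has_smartphone) == 1        # reachable by a web-based mode
y = rng.binomial(1, np.where(covered, 0.45, 0.40))    # some key binary estimate

coverage = np.average(covered, weights=weight)
mean_covered = np.average(y[covered], weights=weight[covered])
mean_noncovered = np.average(y[~covered], weights=weight[~covered])
coverage_bias = (1 - coverage) * (mean_covered - mean_noncovered)

print(f"coverage = {coverage:.3f}, bias of covered-only estimate = {coverage_bias:+.4f}")
```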


Journal of Official Statistics | 2018

A Study of Interviewer Compliance in 2013 and 2014 Census Test Adaptive Designs

Gina Walejko; James Wagner

Researchers are interested in the effectiveness of adaptive and responsive survey designs that monitor and respond to data using tailored or targeted interventions. These designs often require adherence to protocols, which can be difficult when surveys allow in-person interviewers flexibility in managing cases. This article describes examples of interviewer noncompliance and compliance in adaptive design experiments conducted in two United States decennial census tests. The two studies tested adaptive procedures, including having interviewers work prioritized cases and substitute telephone calls for face-to-face attempts. When to perform such procedures was communicated to interviewers via case management systems that required twice-daily data transmissions. We discuss reasons why noncompliance may occur and ways to improve compliance.


Journal of Official Statistics | 2018

An Analysis of Interviewer Travel and Field Outcomes in Two Field Surveys

James Wagner; Kristen Olson

In this article, we investigate the relationship between interviewer travel behavior and field outcomes, such as contact rates, response rates, and contact attempts, in two studies, the National Survey of Family Growth and the Health and Retirement Study. Using call record paradata that have been aggregated to interviewer-day levels, we examine two important cost drivers as measures of interviewer travel behavior: the distance that interviewers travel to segments and the number of segments visited on an interviewer-day. We explore several predictors of these measures of travel: the geographic size of the sampled areas, measures of urbanicity, and other sample and interviewer characteristics. We also explore the relationship between travel and field outcomes, such as the number of contact attempts made and response rates. We find that the number of segments visited on each interviewer-day has a strong association with field outcomes, but the number of miles traveled does not. These findings suggest that survey organizations should routinely monitor the number of segments that interviewers visit, and that more direct measurement of interviewer travel behavior is needed.
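
The interviewer-day aggregation described here might look roughly like the sketch below; the column names, toy call records, and use of straight-line (haversine) distance are assumptions for illustration, not the authors' actual paradata processing.

```python
# Hedged sketch of the interviewer-day aggregation described here: group call-record
# paradata by interviewer and day, count distinct segments visited, and sum the
# straight-line (haversine) distance between consecutive attempts.
import numpy as np
import pandas as pd

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * np.arcsin(np.sqrt(a))

calls = pd.DataFrame({  # toy call records (one row per contact attempt)
    "interviewer": ["A", "A", "A", "B"],
    "date": ["2024-05-01", "2024-05-01", "2024-05-01", "2024-05-01"],
    "segment": ["s1", "s2", "s2", "s9"],
    "lat": [42.28, 42.30, 42.31, 41.90],
    "lon": [-83.74, -83.71, -83.70, -83.60],
    "contact": [0, 1, 1, 0],
}).sort_values(["interviewer", "date"])

# Distance of each attempt from the previous attempt by the same interviewer on the same day.
calls["leg_miles"] = haversine_miles(
    calls.groupby(["interviewer", "date"])["lat"].shift(),
    calls.groupby(["interviewer", "date"])["lon"].shift(),
    calls["lat"],
    calls["lon"],
)

interviewer_day = calls.groupby(["interviewer", "date"]).agg(
    segments_visited=("segment", "nunique"),
    attempts=("segment", "size"),
    contact_rate=("contact", "mean"),
    miles=("leg_miles", "sum"),
)
print(interviewer_day)
```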

Collaboration


Dive into James Wagner's collaborations.

Top Co-Authors

Kristen Olson

University of Nebraska–Lincoln

Heidi Guyer

University of Michigan

Sunghee Lee

University of Michigan
