Publication


Featured research published by John J Dziak.


Psychological Methods | 2009

Design of Experiments with Multiple Independent Variables: A Resource Management Perspective on Complete and Reduced Factorial Designs

Linda M. Collins; John J Dziak; Runze Li

An investigator who plans to conduct an experiment with multiple independent variables must decide whether to use a complete or reduced factorial design. This article advocates a resource management perspective on making this decision, in which the investigator seeks a strategic balance between service to scientific objectives and economy. Considerations in making design decisions include whether research questions are framed as main effects or simple effects; whether and which effects are aliased (confounded) in a particular design; the number of experimental conditions that must be implemented in a particular design and the number of experimental subjects the design requires to maintain the desired level of statistical power; and the costs associated with implementing experimental conditions and obtaining experimental subjects. In this article 4 design options are compared: complete factorial, individual experiments, single factor, and fractional factorial. Complete and fractional factorial designs and single-factor designs are generally more economical than conducting individual experiments on each factor. Although relatively unfamiliar to behavioral scientists, fractional factorial designs merit serious consideration because of their economy and versatility.
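To make the economy argument concrete, here is a small sketch (hypothetical, not from the article) of how a fractional factorial reduces the number of experimental conditions relative to a complete factorial:

```python
from itertools import product

# Full 2^5 factorial: every combination of 5 two-level factors (-1/+1).
full = list(product([-1, 1], repeat=5))
print(len(full))  # 32 experimental conditions

# Half fraction 2^(5-1): keep only runs where the product of the first
# four factor levels equals the fifth (defining relation E = ABCD).
# This halves the number of cells at the cost of aliasing E with ABCD.
half = [run for run in full if run[0] * run[1] * run[2] * run[3] == run[4]]
print(len(half))  # 16 conditions: same five factors, half the cells

# By contrast, separate single-factor experiments on the 5 factors
# would require 5 independent two-condition studies, each powered alone.
```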


Structural Equation Modeling | 2014

Effect Size, Statistical Power and Sample Size Requirements for the Bootstrap Likelihood Ratio Test in Latent Class Analysis.

John J Dziak; Stephanie T. Lanza; Xianming Tan

Selecting the number of different classes that will be assumed to exist in the population is an important step in latent class analysis (LCA). The bootstrap likelihood ratio test (BLRT) provides a data-driven way to evaluate the relative adequacy of a (K – 1)-class model compared to a K-class model. However, very little is known about how to predict the power or the required sample size for the BLRT in LCA. Based on extensive Monte Carlo simulations, we provide practical effect size measures and power curves that can be used to predict power for the BLRT in LCA given a proposed sample size and a set of hypothesized population parameters. Estimated power curves and tables provide guidance for researchers wishing to size a study to have sufficient power to detect hypothesized underlying latent classes.
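The BLRT is a parametric bootstrap procedure. As a rough self-contained sketch (a simplified stand-in, not the authors' software), the same logic can be shown for a 1-class versus 2-class normal mixture, fitted here with a deliberately crude EM step:

```python
import numpy as np

def ll_1class(x):
    # Log-likelihood of a single-normal (1-class) model at its MLE.
    var = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)

def ll_2class(x, iters=100):
    # Crude EM fit of a 2-component normal mixture; returns its log-likelihood.
    floor = 0.05 * x.var()                        # variance floor for stability
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)   # E-step responsibilities
        nk = resp.sum(axis=0)
        w = nk / len(x)                                 # M-step updates
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, floor)
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def blrt_p(x, B=39, seed=1):
    # Parametric bootstrap: simulate from the fitted 1-class (null) model,
    # refit both models, and count how often the simulated likelihood ratio
    # statistic meets or exceeds the observed one.
    rng = np.random.default_rng(seed)
    obs = 2 * (ll_2class(x) - ll_1class(x))
    exceed = 0
    for _ in range(B):
        xb = rng.normal(x.mean(), x.std(), size=len(x))
        exceed += 2 * (ll_2class(xb) - ll_1class(xb)) >= obs
    return (exceed + 1) / (B + 1)
```

The same pattern extends to comparing a (K - 1)-class and a K-class model: simulate from the fitted smaller model, refit both, and locate the observed statistic in the bootstrap distribution.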


American Journal of Preventive Medicine | 2014

Factorial Experiments: Efficient Tools for Evaluation of Intervention Components

Linda M. Collins; John J Dziak; Kari C. Kugler; Jessica B Trail

BACKGROUND: An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved in an efficient and economical way via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the RCT; the two designs address different research questions.

PURPOSE: To offer an introduction to factorial experiments aimed at investigators trained primarily in the RCT.

METHODS: The factorial experiment is compared and contrasted with other experimental designs used commonly in intervention science to highlight where each is most efficient and appropriate.

RESULTS: Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even if it has relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided.

CONCLUSIONS: Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which states that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources.
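The point about per-condition sample size follows from simple arithmetic; a hypothetical illustration (the numbers are not from the article):

```python
from itertools import product

n_total = 320                     # hypothetical total sample size
factors = 5
conditions = list(product([0, 1], repeat=factors))
per_condition = n_total // len(conditions)
print(len(conditions), per_condition)   # 32 conditions, 10 subjects each

# For any one factor's main effect, the conditions split into two halves
# (factor off vs. on), so each arm of the comparison pools
# 16 conditions * 10 subjects = 160 subjects, not 10.
off = [c for c in conditions if c[0] == 0]
print(len(off) * per_condition)   # 160 subjects per arm
```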


Psychological Methods | 2012

Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations.

John J Dziak; Inbal Nahum-Shani; Linda M. Collins

Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements, such as the number of clusters, the number of lower-level units, and the intraclass correlation, affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation.
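One standard way to quantify how clustering erodes power in cluster-randomized comparisons is the design effect, DEFF = 1 + (m - 1) * ICC, where m is the cluster size. The formula is standard; the numbers below are hypothetical:

```python
def design_effect(cluster_size, icc):
    # Variance-inflation factor for cluster-randomized comparisons:
    # DEFF = 1 + (m - 1) * ICC, where m is the cluster size.
    return 1 + (cluster_size - 1) * icc

def effective_n(n_total, cluster_size, icc):
    # Clustered subjects carry less information than independent ones.
    return n_total / design_effect(cluster_size, icc)

# Example: 40 schools of 25 students each, with ICC = 0.05.
print(design_effect(25, 0.05))          # ~2.2
print(effective_n(40 * 25, 25, 0.05))   # ~454.5 independent-subject equivalents
```

Note that when randomization is done at the individual level within clusters, the inflation can be much smaller; this simple formula applies to the cluster-randomized case.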


Methodology | 2016

Comparing the Performance of Improved Classify-Analyze Approaches for Distal Outcomes in Latent Profile Analysis

John J Dziak; Bethany C. Bray; Jieting Zhang; Minqiang Zhang; Stephanie T. Lanza

Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt's (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, and Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but are less well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators.


Archive | 2018

Coding and Interpretation of Effects in Analysis of Data from a Factorial Experiment

Kari C. Kugler; John J Dziak; Jessica Trail

This chapter is intended to describe the differences between effect coding and dummy coding when the multiple regression approach is used to perform analysis of variance (ANOVA) with balanced (i.e., an equal number of subjects in each experimental condition) factorial designs. Using a hypothetical example of a 2³ factorial experiment, we present these two coding schemes for categorical independent variables and explain how the effects estimated can have different interpretations depending on the coding scheme used. Particular attention is paid to highlighting how and why differences exist and when a researcher might want to use one coding scheme over another. We demonstrate that effect coding is usually preferred for analyzing data from factorial experiments in the optimization phase of the multiphase optimization strategy.
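The contrast between the two coding schemes can be seen in a small sketch (hypothetical cell means, not the chapter's example): with dummy coding, a factor's coefficient is a simple effect at the reference level of the other factor; with effect coding (-1/+1), it is half the main effect averaged over the other factor's levels.

```python
import numpy as np

# Balanced 2x2 design, one observation per cell for illustration.
# Cell means chosen so that factors A and B interact.
a = np.array([0, 0, 1, 1])          # factor A: low/high
b = np.array([0, 1, 0, 1])          # factor B: low/high
y = np.array([10., 12., 14., 20.])  # hypothetical cell means

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Dummy coding (0/1): the A coefficient is the SIMPLE effect of A
# when B is at its reference (0) level.
Xd = np.column_stack([np.ones(4), a, b, a * b])
print(fit(Xd, y))   # A coefficient = 14 - 10 = 4

# Effect coding (-1/+1): the A coefficient is HALF the main effect of A
# averaged over both levels of B.
ae, be = 2 * a - 1, 2 * b - 1
Xe = np.column_stack([np.ones(4), ae, be, ae * be])
print(fit(Xe, y))   # A coefficient = ((14 + 20)/2 - (10 + 12)/2) / 2 = 3
```

Because the design is balanced, both models reproduce the cell means exactly; only the meaning attached to each coefficient changes.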


Statistics in Medicine | 2014

Time‐varying effect models for ordinal responses with applications in substance abuse research

John J Dziak; Runze Li; Marc A. Zimmerman; Anne Buu

Ordinal responses are very common in longitudinal data collected from substance abuse research or other behavioral research. This study develops a new statistical model with free SAS macros that can be applied to characterize time-varying effects on ordinal responses. Our simulation study shows that the ordinal-scale time-varying effects model has very low estimation bias and sometimes offers considerably better performance when fitting data with ordinal responses than a model that treats the response as continuous. Contrary to a common assumption that an ordinal scale with several levels can be treated as continuous, our results indicate that it is not so much the number of levels on the ordinal scale but rather the skewness of the distribution that makes a difference in the relative performance of linear versus ordinal models. We use longitudinal data from a well-known study on youth at high risk for substance abuse as a motivating example to demonstrate that the proposed model can characterize the time-varying effect of negative peer influences on alcohol use in a way that is more consistent with developmental theory and the existing literature than the linear time-varying effect model.


Archive | 2018

Multilevel Factorial Designs in Intervention Development

Inbal Nahum-Shani; John J Dziak

Factorial designs are one of the many useful experimental tools that can be used to inform the construction of multicomponent behavioral, biobehavioral, and biomedical interventions. Clustering presents various challenges to investigators aiming to implement such designs. Clustering means that some or all individuals are nested in higher-level social or administrative units (e.g., schools, therapy groups). These multilevel settings generate dependency in data within clusters because individuals in one cluster tend to be more similar to each other than to individuals in other clusters. Such dependency has implications for the design of the factorial experiment, the model used to analyze the data, and the power for detecting the effects of interest. In this chapter, we discuss five classes of multilevel factorial designs that vary in terms of the nature of clustering (i.e., the process by which individuals become clustered or the reason why they are considered to be clustered), as well as the randomization scheme employed (i.e., whether randomization to experimental conditions is done at the individual level, the cluster level, or both). For each of the five classes, we discuss the scientific motivation for employing the multilevel factorial design, provide a model for analyzing data arising from employing a multilevel factorial design of this class, and offer formulas that investigators can use to calculate the expected power. Design considerations are also discussed with respect to each class. Our goal is to provide a comprehensive review to help investigators select the most suitable design given their scientific questions, target population, and available resources.


Archive | 2018

Optimizing the Cost-Effectiveness of a Multicomponent Intervention Using Data from a Factorial Experiment: Considerations, Open Questions, and Tradeoffs Among Multiple Outcomes

John J Dziak

Cost-effectiveness—increasing the benefit obtained for a given expenditure of time or money—is an important idea in many applied research fields. It is one important quality that a researcher interested in the multiphase optimization strategy (MOST) may wish to optimize. However, further research is needed about how to best incorporate cost information into the analysis of factorial experiments typically used during the optimization phase of MOST. This chapter will review the issues involved in making cost-effectiveness judgments using the results of factorial experiments and explore some possibilities for further methodological research on how best to estimate and compare cost-effectiveness using the results of factorial experiments.


Journal of Addictive Diseases | 2017

Contemporary alcohol use patterns among a national sample of U.S. adult drinkers

Ashley N. Linden-Carmichael; Stephanie T. Lanza; John J Dziak; Bethany C. Bray

The aim of the current article is to identify subgroups of adult drinkers characterized by typical drinking patterns. Data from the National Epidemiologic Survey on Alcohol and Related Conditions-III were used to classify drinkers based on several indicators of drinking. Past-year drinkers aged 18–64 were included (n = 22,776). Latent class analysis revealed a 5-class model: Occasional, Light Drinkers (28%), Frequent Drinkers (25%), Infrequent Drinkers with Occasional Binging (5%), Frequent Drinkers with Occasional Binging (22%), and High-Intensity Drinkers (20%). Although most were Light Drinkers, many engaged in excessive drinking. Given the potential risk for harm, prevention efforts are warranted particularly for High-Intensity Drinkers.

Collaboration


Top co-authors of John J Dziak:

Stephanie T. Lanza (Pennsylvania State University)
Linda M. Collins (Pennsylvania State University)
Bethany C. Bray (Pennsylvania State University)
Runze Li (Pennsylvania State University)
Kari C. Kugler (Pennsylvania State University)
Xianming Tan (Pennsylvania State University)
David R. Lemmon (Pennsylvania State University)
Jessica B Trail (Pennsylvania State University)