Miao Yang
University of Notre Dame
Publications
Featured research published by Miao Yang.
Psycho-oncology | 2016
Thomas V. Merluzzi; Errol J. Philip; Miao Yang; Carolyn A. Heitzmann
Optimal matching theory posits that the effects of social support are enhanced when its provision is matched with the need for support. We hypothesized that matching received social support with the needs of persons with cancer and cancer survivors would be related to better psychosocial adjustment than a mismatched condition.
Structural Equation Modeling | 2018
Miao Yang; Ge Jiang; Ke-Hai Yuan
Among test statistics for assessing overall model fit in structural equation modeling (SEM), the Satorra–Bentler rescaled statistic is most widely used when the normality assumption is violated. However, many researchers have found that the rescaled statistic tends to overreject correct models when the number of variables (p) is large and/or the sample size (N) is small. Modifications of the rescaled statistic have been proposed, but few studies have examined their performance against each other, especially when p is large. This article systematically evaluates 10 corrected versions of the rescaled statistic. Results show that the Bartlett correction and a recently proposed rank correction perform better than others in controlling Type I error rates, according to their deviations from the nominal rate. Nevertheless, the performance of both corrections depends heavily on p in addition to N. As p becomes relatively large, none of the corrected versions can properly control Type I errors even when N is rather large.
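The Bartlett correction mentioned above rescales a test statistic by an N-dependent multiplier. A minimal sketch, assuming the classical factor-analysis form of the multiplier (the corrected versions evaluated in the article may use different multipliers):

```python
def bartlett_corrected(T, N, p, k):
    """Bartlett-type small-sample correction: rescale the test statistic T
    by a multiplier that shrinks toward 1 as N grows, so the mean of the
    corrected statistic is closer to the chi-square degrees of freedom.
    This uses the classical factor-analysis multiplier, which effectively
    replaces N - 1 with N - 1 - (2p + 5)/6 - 2k/3 (k = number of factors);
    the exact form appropriate for a given SEM may differ."""
    multiplier = (N - 1 - (2 * p + 5) / 6 - 2 * k / 3) / (N - 1)
    return multiplier * T
```

Because the multiplier is below 1, the correction deflates the statistic, counteracting the overrejection at small N described above.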
Multivariate Behavioral Research | 2017
Ke-Hai Yuan; Miao Yang; Ge Jiang
ABSTRACT Survey data often contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. With typical nonnormally distributed data in practice, a rescaled statistic Trml proposed by Satorra and Bentler was recommended in the literature of SEM. However, Trml has been shown to be problematic when the sample size N is small and/or the number of variables p is large. There does not exist a reliable test statistic for SEM with small N or large p, especially with nonnormally distributed data. Following the principle of Bartlett correction, this article develops empirical corrections to Trml so that the mean of the empirically corrected statistics approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics control Type I errors reasonably well even when N is smaller than 2p, where Trml may reject the correct model 100% of the time even for normally distributed data. The application of the empirically corrected statistics is illustrated via a real data example.
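The empirical-correction idea, matching the mean of the corrected statistic to the nominal degrees of freedom, can be sketched as follows; `simulate_stat` is a hypothetical stand-in for whatever scheme generates draws of the statistic under a correctly specified model:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_correction_factor(simulate_stat, df, n_reps=500):
    """Empirical (Bartlett-style) correction: estimate the factor c such
    that the mean of c * T over replications equals the nominal chi-square
    degrees of freedom df."""
    draws = np.array([simulate_stat() for _ in range(n_reps)])
    return df / draws.mean()

# Toy illustration: a statistic that behaves like chi-square(df) inflated
# by 30%, mimicking the overrejection of Trml at small N.
df = 20
inflated = lambda: 1.3 * rng.chisquare(df)
c = empirical_correction_factor(inflated, df)
# c pulls the mean of the corrected statistic back toward df
```

The estimated factor should land near 1/1.3, so the corrected statistic's mean is restored to approximately df.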
Multivariate Behavioral Research | 2016
Miao Yang; Ke-Hai Yuan
ABSTRACT Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
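A minimal sketch of M-estimation with Huber-type weights, implemented here as plain iteratively reweighted least squares on a moderation design matrix; the article's own method (a two-level model with its standard-error formulas, provided as an R program) will differ in details:

```python
import numpy as np

def huber_irls(X, y, c=1.345, tol=1e-8, max_iter=100):
    """M-estimation with Huber-type weights via iteratively reweighted
    least squares: observations with large standardized residuals get
    downweighted, limiting the influence of heavy-tailed errors."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale
        u = r / (s + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Moderation model: y = b0 + b1*x + b2*m + b3*x*m + e, heavy-tailed errors
rng = np.random.default_rng(1)
n = 500
x, m = rng.normal(size=n), rng.normal(size=n)
e = rng.standard_t(df=3, size=n)
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + e
X = np.column_stack([np.ones(n), x, m, x * m])
beta = huber_irls(X, y)
```

The coefficient on the product term x*m is the moderation effect of interest; with t(3) errors, the Huber fit recovers it more stably than OLS would.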
Structural Equation Modeling | 2018
Ke-Hai Yuan; Ge Jiang; Miao Yang
Mean and mean-and-variance corrections are the 2 major principles for developing test statistics when distributional conditions are violated. In structural equation modeling (SEM), mean-rescaled and mean-and-variance-adjusted test statistics have been recommended under different contexts. However, recent studies indicated that their Type I error rates vary from 0% to 100% as the number of variables p increases. Can we still trust the 2 principles, and what alternative rules can be used to develop test statistics for SEM with “big data”? This article addresses these issues via a large-scale Monte Carlo study. Results indicate that the empirical means and standard deviations of each statistic can differ from their expected values by many standardized units when p is large. Thus, the problems in Type I error control with the 2 statistics arise because they do not possess the properties to which they are entitled, not because the mean and mean-and-variance corrections are themselves flawed. However, the 2 principles need to be implemented using small-sample methodology instead of asymptotics. Results also indicate that distributions other than chi-square might better describe the behavior of test statistics in SEM with big data.
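The diagnostic described above, comparing the empirical mean of a statistic with its chi-square expectation in standardized units, can be sketched as follows:

```python
import numpy as np

def standardized_deviation(stats, df):
    """How far the empirical mean of a test statistic lies from its nominal
    chi-square expectation, in standard-error units. Under chi-square(df),
    E[T] = df and Var[T] = 2*df, so the standard error of the mean of n
    replications is sqrt(2*df/n)."""
    stats = np.asarray(stats)
    se_of_mean = np.sqrt(2 * df / len(stats))
    return (stats.mean() - df) / se_of_mean

rng = np.random.default_rng(2)
df = 50
well_behaved = rng.chisquare(df, size=1000)
inflated = 1.2 * rng.chisquare(df, size=1000)  # mimics a misbehaving statistic
z_ok = standardized_deviation(well_behaved, df)
z_bad = standardized_deviation(inflated, df)
```

A statistic that truly follows its reference chi-square keeps this z near 0, while a 20% inflation of the mean shows up as a deviation of dozens of standardized units, which is the pattern the study reports for large p.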
Psychological Assessment | 2017
Thomas V. Merluzzi; Errol J. Philip; Carolyn A. Heitzmann Ruhf; Haiyan Liu; Miao Yang; Claire C. Conley
Based on self-regulation and self-efficacy theories, the Cancer Behavior Inventory (CBI; Heitzmann et al., 2011; Merluzzi & Martinez Sanchez, 1997; Merluzzi, Nairn, Hegde, Martinez Sanchez, & Dunn, 2001) was developed as a measure of self-efficacy strategies for coping with cancer. In the latest revision, CBI-V3.0, a number of psychometric and empirical advances were made: (a) the reading level was reduced to 6th-grade level; (b) individual interviews and focus groups were used to revise items; (c) a new spiritual coping subscale was added; (d) data were collected from 4 samples (total N = 1,405) to conduct an exploratory factor analysis with targeted rotation, 2 confirmatory factor analyses, and differential item functioning; (e) item trimming was used to reduce the total number to 27; (f) internal consistency and test–retest reliability were computed; and (g) extensive validity testing was conducted. The results, which build upon the strengths of prior versions, confirm a structurally and psychometrically sound and unbiased measure of self-efficacy strategies for coping with cancer with a reduced number of items for ease of administration. The factors include Maintaining Activity and Independence, Seeking and Understanding Medical Information, Emotion Regulation, Coping With Treatment Related Side Effects, Accepting Cancer/Maintaining a Positive Attitude, Seeking Social Support, and Using Spiritual Coping. Internal consistency (α = .946), test–retest reliability (r = .890; 4 months), and validity coefficients with a variety of relevant measures indicated strong psychometric properties. The new 27-item CBI-V3.0 has both research utility and clinical utility as a screening and treatment-planning measure of self-efficacy strategies for coping with cancer.
Structural Equation Modeling | 2018
Lijuan Wang; Miao Yang; Xiao Liu
This study discusses the effects of oversimplifying the between-subject covariance structure on inferences for fixed effects in modeling nested data. Linear and quadratic growth curve models (GCMs) with both full and simplified between-subject covariance structures were fit to real longitudinal data. The results were contradictory to the statement that using oversimplified between-subject covariance structures (e.g., uni-level analysis) leads to underestimated standard errors of fixed effect estimates and thus inflated Type I error rates. We analytically derived simple mathematical forms to systematically examine the oversimplification effects for the linear GCMs. The derivation results were aligned with the real data analysis results and further revealed the conditions under which the standard errors of the fixed-effect intercept and slope estimates could be underestimated or overestimated for over-simplified linear GCMs. Therefore, our results showed that the underestimation statement is a myth and can be misleading. Implications are discussed and recommendations are provided.
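The direction of the bias can be illustrated directly from the GLS algebra: when errors truly have covariance V but independence is assumed, the naive OLS standard errors can be too small for some fixed effects and too large for others. A small numeric sketch, using exchangeable within-subject errors over five occasions purely as an assumed example:

```python
import numpy as np

def ols_se_naive_vs_true(X, V, sigma2=1.0):
    """Compare naive OLS standard errors (which assume independent,
    homoscedastic errors) with the true sampling SDs of the OLS estimator
    when the errors actually have covariance V:
      naive:  sigma2 * (X'X)^{-1}
      true:   (X'X)^{-1} X' V X (X'X)^{-1}"""
    XtX_inv = np.linalg.inv(X.T @ X)
    naive = np.sqrt(sigma2 * np.diag(XtX_inv))
    true_cov = XtX_inv @ X.T @ V @ X @ XtX_inv
    return naive, np.sqrt(np.diag(true_cov))

# Linear growth design: intercept and slope over 5 equally spaced occasions
t = np.arange(5.0)
X = np.column_stack([np.ones(5), t])
rho = 0.5
V = (1 - rho) * np.eye(5) + rho * np.ones((5, 5))  # exchangeable errors
naive, true = ols_se_naive_vs_true(X, V)
```

In this configuration the naive intercept SE is an underestimate while the naive slope SE is an overestimate, matching the finding above that oversimplifying the covariance structure does not uniformly deflate standard errors.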
Structural Equation Modeling | 2018
Miao Yang; Ke-Hai Yuan
Ridge generalized least squares (RGLS) is a recently proposed estimation procedure for structural equation modeling. In the formulation of RGLS, there is a key element, ridge tuning parameter, whose value determines the efficiency of parameter estimates. This article aims to optimize RGLS by developing formulas for the ridge tuning parameter to yield the most efficient parameter estimates in practice. For the formulas to have a wide scope of applicability, they are calibrated using empirical efficiency and via many conditions on population distribution, sample size, number of variables, and model structure. Results show that RGLS with the tuning parameter determined by the formulas can substantially improve the efficiency of parameter estimates over commonly used procedures with real data being typically nonnormally distributed.
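One way to see why ridging helps: the estimated weight matrix of GLS-type estimation can be near-singular when N is small relative to the number of distinct variances and covariances, and a ridge term bounds its condition number. A sketch, assuming a convex-combination form of the ridged weight matrix; the exact RGLS parameterization in the article may differ:

```python
import numpy as np

def ridge_weight(Gamma_hat, a):
    """Shrink an estimated weight matrix toward the identity before
    inversion; the tuning parameter a in [0, 1] controls the amount of
    ridging (a sketch of the general idea, not the article's formula)."""
    return (1 - a) * Gamma_hat + a * np.eye(Gamma_hat.shape[0])

rng = np.random.default_rng(3)
# Near-singular Gamma_hat, as happens when N is small relative to the
# number of sample variances and covariances being weighted.
B = rng.normal(size=(20, 5))
Gamma_hat = B @ B.T / 20 + 1e-6 * np.eye(20)  # rank-deficient plus jitter
cond_raw = np.linalg.cond(Gamma_hat)
cond_ridged = np.linalg.cond(ridge_weight(Gamma_hat, 0.1))
```

Even a modest ridge (a = 0.1) collapses the condition number by orders of magnitude, which is what stabilizes the resulting parameter estimates; the article's contribution is a calibrated formula for choosing the tuning parameter.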
Psycho-oncology | 2018
Thomas V. Merluzzi; Samantha Serpentini; Errol J. Philip; Miao Yang; Natalia Salamanca-Balen; Carolyn A. Heitzmann Ruhf; Antonio Catarinella
Social relationship coping efficacy (SRCE) is the confidence to engage in behaviors that can maintain or enhance close social relationships in the context of illness. This study focused on psychometric analyses of the SRCE scale and its role in maintaining or enhancing personal relationships, social support, and quality of life (QOL).
Multivariate Behavioral Research | 2018
Miao Yang; Ge Jiang
Trustworthy test statistics are of critical importance for evaluating overall model fit in structural equation modeling (SEM). While the likelihood ratio statistic TML derived from the normality assumption is most widely used, real data seldom follow normal distributions. A statistic widely recommended with nonnormally distributed data is the Satorra-Bentler mean-rescaled statistic TRML. However, TRML can also be problematic when sample size (N) is small, especially with “big data” that are characterized by a large p. Various corrections have been developed to improve the behavior of TML and TRML. We investigated the performance of six existing corrected rescaled statistics, including a correction by Swain (1975), two versions of Bartlett correction (Bartlett, 1951; Yuan, 2005), and three versions of rank correction (Jiang & Yuan, 2017). We also proposed 4 additional rescaled statistics by applying recent developments in Yuan, Tian, and Yanagihara (2015) for TML to TRML. In total, we studied the performance of TRML and 10 corrected rescaled statistics. A Monte Carlo simulation was conducted to evaluate the performance of the 11 statistics. Data were generated from a LISREL model with 2 exogenous variables and 3 endogenous variables. Manipulated factors include p (15 to 80), N (70 to 2500), and population distributions (multivariate normal and nonnormal). The simulation results showed that the 10 corrected rescaled statistics all outperformed TRML. In particular, the general Bartlett correction (Bartlett, 1951) and the recently proposed rank correction (Jiang & Yuan, 2017) performed better than the other corrections in terms of overall Type I error control. However, with large p and insufficiently large N (e.g., p ≥ 40 and N ≤ 500), none of the corrected rescaled statistics could be trusted. More specifically, the rank corrected rescaled statistics often yielded 0% rejection whereas the nominal rate is 5%.
On the other hand, the Bartlett corrected rescaled statistics could reject correct models at rates as high as almost 100% and as low as 0%. The results suggested
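For reference, the Satorra-Bentler rescaling that defines TRML divides the normal-theory statistic by a scaling factor c = tr(U Gamma) / df; only that trace enters the rescaling, with the residual weight matrix U and the asymptotic covariance matrix Gamma of the sample covariances assumed computed elsewhere:

```python
def satorra_bentler_rescale(T_ml, trace_U_Gamma, df):
    """Satorra-Bentler mean rescaling: TRML = TML / c with
    c = tr(U @ Gamma) / df, so the rescaled statistic has approximate
    mean df under nonnormality. `trace_U_Gamma` is tr(U @ Gamma),
    assumed to be supplied by the fitting routine."""
    c = trace_U_Gamma / df
    return T_ml / c
```

When data are multivariate normal, c is near 1 and TRML is close to TML; the corrections studied above all act on top of this rescaling.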