Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiao Hua Zhou is active.

Publication


Featured research published by Xiao Hua Zhou.


Autism | 2009

Parenting stress and psychological functioning among mothers of preschool children with autism and developmental delay

Annette Estes; Jeffrey Munson; Geraldine Dawson; Elizabeth Koehler; Xiao Hua Zhou; Robert D. Abbott

Parents of children with developmental disabilities, particularly autism spectrum disorders (ASDs), are at risk for high levels of distress, but the factors contributing to this risk are unclear. This study investigated how child characteristics influence maternal parenting stress and psychological distress. Participants consisted of mothers and their developmental-age-matched preschool-aged children with ASD (N = 51) or developmental delay without autism (DD) (N = 22). Mothers in the ASD group showed higher levels of parenting stress and psychological distress than mothers in the DD group. Children's problem behavior was associated with increased parenting stress and psychological distress in mothers in both the ASD and DD groups, and this relationship was stronger in the DD group. Daily living skills were not related to parenting stress or psychological distress. Results suggest that clinical services aiming to support parents should include a focus on reducing problem behaviors in children with developmental disabilities.


Journal of the American Medical Informatics Association | 1997

A Randomized Trial of “Corollary Orders” to Prevent Errors of Omission

J. Marc Overhage; William M. Tierney; Xiao Hua Zhou; Clement J. McDonald

OBJECTIVE: Errors of omission are a common cause of systems failures. Physicians often fail to order tests or treatments needed to monitor or ameliorate the effects of other tests or treatments. The authors hypothesized that automated, guideline-based reminders to physicians, provided as they wrote orders, could reduce these omissions. DESIGN: The study was performed on the inpatient general medicine ward of a public teaching hospital. Faculty and housestaff from the Indiana University School of Medicine, who used computer workstations to write orders, were randomized to intervention and control groups. As intervention physicians wrote orders for 1 of 87 selected tests or treatments, the computer suggested corollary orders needed to detect or ameliorate adverse reactions to the trigger orders. The physicians could accept or reject these suggestions. RESULTS: During the 6-month trial, reminders about corollary orders were presented to 48 intervention physicians and withheld from 41 control physicians. Intervention physicians ordered the suggested corollary orders in 46.3% of instances when they received a reminder, compared with 21.9% compliance by control physicians (p < 0.0001). Physicians discriminated in their acceptance of suggested orders, readily accepting some while rejecting others. There were one-third fewer interventions initiated by pharmacists with physicians in the intervention group than in the control group. CONCLUSION: This study demonstrates that physician workstations, linked to a comprehensive electronic medical record, can be an efficient means of decreasing errors of omission and improving adherence to practice guidelines.


Journal of Bone and Mineral Research | 1997

Universal Standardization of Bone Density Measurements: A Method with Optimal Properties for Calibration Among Several Instruments

Siu L. Hui; Sujuan Gao; Xiao Hua Zhou; C. Conrad Johnston; Ying Lu; Claus C. Glüer; Stephen Grampp; Harry K. Genant

The International Dual‐Photon X‐Ray Absorptiometry (DXA) Standardization Committee (IDSC) conducted a cross‐calibration study among three models of DXA machines from three different manufacturers, in which 100 subjects were scanned on all three machines. A set of equations was derived to convert bone mineral density (BMD) measured on each machine to a “standardized BMD” (sBMD), such that sBMD values for the same subject derived from different machines would be approximately the same. In a reanalysis of the cross‐calibration data, we showed that the conversion method used in the IDSC study did not achieve several optimal properties desirable in such conversions. We derived new conversion equations to sBMD by minimizing the differences among sBMD values from the three machines. More importantly, the new conversions are free of the residual bias present in the IDSC conversions. The performance of the two methods was compared on the cross‐calibration data as well as on an external data set. We conclude that the IDSC conversions are adequate for clinical use on other machines worldwide, but that researchers should standardize the machines in their own laboratories using the new method.
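The least-squares calibration idea the abstract describes can be illustrated with a minimal sketch: fit one linear conversion per machine so that converted readings of the same subject agree as closely as possible, with one reference machine pinning down the scale. The function name and setup below are hypothetical illustrations of the principle only, not the IDSC's or the authors' actual conversion equations.

```python
import numpy as np

def fit_standardizing_conversions(bmd, ref=0):
    """Fit linear conversions s_i(x) = a_i + b_i * x for each machine so that
    converted readings of the same subject agree in least squares over all
    machine pairs. The scale is pinned down by forcing machine `ref` to keep
    its original units (a_ref = 0, b_ref = 1).

    bmd : (n_subjects, n_machines) array of paired BMD readings.
    Returns (a, b) arrays of intercepts and slopes.
    """
    n, m = bmd.shape
    free = [i for i in range(m) if i != ref]
    idx = {i: p for p, i in enumerate(free)}   # machine -> unknown-block index
    rows, rhs = [], []
    for k in range(n):                          # one equation per subject/pair:
        for i in range(m):                      #   a_i + b_i*x_ki - a_j - b_j*x_kj = 0
            for j in range(i + 1, m):
                row, r = np.zeros(2 * len(free)), 0.0
                for mach, sign in ((i, 1.0), (j, -1.0)):
                    if mach == ref:             # known terms move to the RHS
                        r -= sign * bmd[k, mach]
                    else:
                        p = idx[mach]
                        row[2 * p] = sign                     # coefficient of a
                        row[2 * p + 1] = sign * bmd[k, mach]  # coefficient of b
                rows.append(row)
                rhs.append(r)
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    a, b = np.zeros(m), np.ones(m)
    for mach, p in idx.items():
        a[mach], b[mach] = theta[2 * p], theta[2 * p + 1]
    return a, b

# Quick check with synthetic paired readings (no noise, so recovery is exact):
rng = np.random.default_rng(0)
true = rng.uniform(0.6, 1.4, size=100)
readings = np.stack([true, 0.9 * true + 0.05, 1.1 * true - 0.02], axis=1)
a, b = fit_standardizing_conversions(readings)
sbmd = a + b * readings   # converted readings now agree across machines
```

Fixing one machine as the reference avoids the degenerate solution in which every conversion collapses to the same constant.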


Journal of General Internal Medicine | 2003

Effects of computerized guidelines for managing heart disease in primary care.

William M. Tierney; J. Marc Overhage; Michael D. Murray; Lisa E. Harris; Xiao Hua Zhou; George J. Eckert; Faye Smith; Nancy A. Nienaber; Clement J. McDonald; Fredric D. Wolinsky

BACKGROUND: Electronic information systems have been proposed as one means to reduce medical errors of commission (doing the wrong thing) and omission (not providing indicated care). OBJECTIVE: To assess the effects of computer-based cardiac care suggestions. DESIGN: A randomized, controlled trial targeting primary care physicians and pharmacists. SUBJECTS: A total of 706 outpatients with heart failure and/or ischemic heart disease. INTERVENTIONS: Evidence-based cardiac care suggestions, approved by a panel of local cardiologists and general internists, were displayed to physicians and pharmacists as they cared for enrolled patients. MEASUREMENTS: Adherence to the care suggestions, generic and condition-specific quality of life, acute exacerbations of cardiac disease, medication compliance, health care costs, satisfaction with care, and physicians’ attitudes toward guidelines. RESULTS: Subjects were followed for 1 year, during which they made 3,419 primary care visits and were eligible for 2,609 separate cardiac care suggestions. The intervention had no effect on physicians’ adherence to the care suggestions (23% for intervention patients vs 22% for controls). There were no intervention-control differences in quality of life, medication compliance, health care utilization, costs, or satisfaction with care. Physicians viewed guidelines as providing helpful information but as constraining their practice and not helpful in making decisions for individual patients. CONCLUSIONS: Care suggestions generated by a sophisticated electronic medical record system failed to improve adherence to accepted practice guidelines or outcomes for patients with heart disease. Future studies must weigh the benefits and costs of different (and perhaps more draconian) methods of affecting clinician behavior.


Biometrics | 1997

Methods for Comparing the Means of Two Independent Log-Normal Samples

Xiao Hua Zhou; Sujuan Gao; Siu L. Hui

The standard t-test and Wilcoxon test have deficiencies for comparing the means of two skewed log-normal samples. In this paper, we propose two new methods to overcome these deficiencies: (1) a likelihood-based approach and (2) a bootstrap-based approach. Our simulation study shows that the likelihood-based approach performs best in terms of type I error rate and power when the data follow a log-normal distribution.
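A bootstrap-based comparison of two skewed samples' means can be sketched as follows. This is a minimal, generic illustration of the idea (independent resampling within each group and recentering under the null), not necessarily the exact procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_diff_test(x, y, n_boot=10_000):
    """Two-sided bootstrap test of H0: E[X] = E[Y] for two skewed samples.
    Resamples each group independently, then centers the bootstrap
    differences at zero to approximate the null distribution."""
    obs = x.mean() - y.mean()
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        diffs[b] = xb.mean() - yb.mean()
    null = diffs - obs                          # recenter under H0
    return np.mean(np.abs(null) >= abs(obs))    # two-sided p-value

# Example: two log-normal samples with equal log-scale means but different
# log-scale variances, hence different means on the original scale.
x = rng.lognormal(mean=1.0, sigma=1.2, size=60)
y = rng.lognormal(mean=1.0, sigma=0.5, size=60)
print(bootstrap_mean_diff_test(x, y))
```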


Journal of General Internal Medicine | 1998

Risk of Major Hemorrhage for Outpatients Treated with Warfarin

Deborah A. McMahan; David M. Smith; Mark A. Carey; Xiao Hua Zhou

OBJECTIVE: To determine the incidence of major hemorrhage among outpatients started on warfarin therapy after the 1986 recommendation for reduced-intensity anticoagulation therapy, and to identify baseline patient characteristics that predict which patients will have a major hemorrhage. DESIGN: Retrospective cohort study. SETTING: A university-affiliated Veterans Affairs Medical Center. PATIENTS: Five hundred seventy-nine patients who were discharged from the hospital after being started on warfarin therapy. MEASUREMENTS AND MAIN RESULTS: The primary outcome variable was major hemorrhage. In our cohort of 579 patients, there were 40 first-time major hemorrhages, with only one fatal bleed. The cumulative incidence was 7% at 1 year. The average monthly incidence of major hemorrhage was 0.82% during the first 3 months of treatment and decreased to 0.36% thereafter. Three independent predictors of major hemorrhage were identified: a history of alcohol abuse, chronic renal insufficiency, and a previous gastrointestinal bleed. Age, comorbidities, medications known to influence prothrombin levels, and baseline laboratory values were not associated with major hemorrhage. CONCLUSIONS: The incidence of major hemorrhage in this population of outpatients treated with warfarin was lower than estimates made before the recommendation for reduced-intensity anticoagulation therapy, but still higher than estimates reported from clinical trials. Alcohol abuse, chronic renal insufficiency, and a previous gastrointestinal bleed were associated with increased risk of major hemorrhage.


Statistics in Medicine | 1997

Confidence intervals for the log-normal mean

Xiao Hua Zhou; Sujuan Gao

In this paper we conduct a simulation study to evaluate the coverage error, interval width, and relative bias of four main methods for constructing confidence intervals for the log-normal mean: the naive method, Cox's method, a conservative method, and a parametric bootstrap method. The simulation study finds that the naive method is inappropriate, that Cox's method has the smallest coverage error for moderate and large sample sizes, and that the bootstrap method has the smallest coverage error for small sample sizes. In addition, Cox's method produces the smallest interval width among the three appropriate methods. We also apply the four methods to a real data set to illustrate their differences.
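Cox's method has a short closed form, sketched below in a commonly presented version: the log-normal mean is exp(mu + sigma^2/2), and the estimate ybar + s^2/2 on the log scale is treated as approximately normal. The exact variance term varies slightly across presentations, so take this as an illustrative sketch rather than the paper's precise formulation.

```python
import numpy as np
from scipy import stats

def cox_lognormal_mean_ci(x, level=0.95):
    """Approximate CI for the mean of a log-normal sample via Cox's method.
    On the log scale the mean of X is exp(theta) with theta = mu + sigma^2/2;
    theta_hat = ybar + s^2/2 is treated as approximately normal with variance
    s^2/n + s^4/(2*(n-1)) (a common textbook form)."""
    y = np.log(x)
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    theta = ybar + s2 / 2
    se = np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    z = stats.norm.ppf(0.5 + level / 2)
    return np.exp(theta - z * se), np.exp(theta + z * se)

# Example: true mean is exp(2 + 0.8**2 / 2) ~ 10.2
rng = np.random.default_rng(1)
x = rng.lognormal(mean=2.0, sigma=0.8, size=40)
print(cox_lognormal_mean_ci(x))
```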


Journal of General Internal Medicine | 1996

Risk factors for delirium tremens development.

Jeffrey A. Ferguson; Christopher J. Suelzer; George J. Eckert; Xiao Hua Zhou; Robert S. Dittus

OBJECTIVE: To identify clinical characteristics associated with inpatient development of delirium tremens so that future treatment efforts can focus on the patients most likely to benefit from aggressive therapy. DESIGN: Retrospective cohort study among patients discharged with diagnoses related to alcohol abuse. SETTING: University-affiliated inner-city hospital. PATIENTS/PARTICIPANTS: Two hundred consecutive patients discharged between June 1991 and August 1992 who underwent evaluation and treatment for alcohol withdrawal or detoxification. MEASUREMENTS AND MAIN RESULTS: Mean age was 41.9 years; 85% were male, 57% were white, and 84% were unmarried. Forty-eight (24%) of the patients developed delirium tremens during hospitalization. Bivariate analysis indicated that those who developed delirium tremens were more likely to be African-American, unemployed, and homeless; to have gone more days since their last drink; and to have concurrent acute medical illness, a high admission blood urea nitrogen level and respiratory rate, and a low admission albumin level and systolic blood pressure. In multiple logistic regression analyses, patients who developed delirium tremens were more likely to have gone more days since their last drink (odds ratio [OR] 1.3; 95% confidence interval [CI] 1.09, 1.61) and to have concurrent acute medical illness (OR 5.1; 95% CI 2.07, 12.55). These two risk factors were combined to assess their ability to predict the occurrence of delirium tremens: if neither factor was present, 9% of patients developed delirium tremens; if one factor was present, 25%; and if both factors were present, 54%. CONCLUSIONS: Inpatient development of delirium tremens was common among patients treated for alcohol detoxification or withdrawal and correlated with several readily available clinical variables.


Pharmacoepidemiology and Drug Safety | 2000

The use of propensity scores in pharmacoepidemiologic research

Susan M. Perkins; Wanzhu Tu; Michael G. Underhill; Xiao Hua Zhou; Michael D. Murray

This paper describes the application of propensity score analysis in pharmacoepidemiologic research, using a study comparing the renal effects of two commonly prescribed non‐steroidal anti‐inflammatory drugs (NSAIDs).
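The core mechanics of a propensity score analysis can be sketched briefly: model the probability of treatment given covariates, then compare outcomes within strata of that probability. The sketch below uses logistic regression and quintile stratification as a generic illustration; the function name, models, and stratification choice are assumptions, not the specific analysis in this paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_stratified_effect(X, treated, outcome, n_strata=5):
    """Estimate a treatment effect after stratifying on the propensity score.
    Fits P(treated | X) by logistic regression, cuts the scores into quantile
    strata, and averages within-stratum outcome differences weighted by
    stratum size."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.digitize(ps, edges[1:-1])        # stratum index 0..n_strata-1
    effects, weights = [], []
    for s in range(n_strata):
        m = strata == s
        t, c = m & (treated == 1), m & (treated == 0)
        if t.any() and c.any():
            effects.append(outcome[t].mean() - outcome[c].mean())
            weights.append(m.sum())
    return np.average(effects, weights=weights)

# Example with synthetic confounded data; the true effect is 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
outcome = 2.0 * treated + X[:, 0] + rng.normal(size=500)
print(propensity_stratified_effect(X, treated, outcome))
```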


Annals of Internal Medicine | 1997

Methods for comparison of cost data

Xiao Hua Zhou; Catherine A. Melfi; Siu L. Hui

In this era of increasing emphasis on containment of health care costs, the availability of large clinical and administrative databases has facilitated comparisons of the costs of new treatments or policies in health care delivery systems, and we are therefore seeing more and more cost analyses in the literature. For example, Medicare claims data have been used to study the hospital charges of older adults [1] and the variability in patient-level costs for coronary artery bypass graft surgery [2]. Florida Medicaid claims data have been used to study the effect of a Medicaid home- and community-based waiver program on Medicaid expenditures for persons with AIDS [3]. The Regenstrief Medical Record System, a clinical database, has been used in several studies on the effectiveness of computer reminders about charges for outpatient diagnostic tests [4] and about inpatient charges in an urban hospital [5]. This database was also used to predict inpatient charges [6] and to examine diagnostic charges associated with symptoms of depression in the elderly [7].

The distribution of cost data is often skewed because a small percentage of patients invariably incur extremely high costs relative to most patients. The skewed distribution of cost data is often log-normal; that is, the log-transformed cost data follow a normal distribution [1-3, 5-7]. For the comparison of mean costs between two groups, three commonly used methods are the parametric Student t-test on untransformed costs, the parametric Student t-test on log-transformed costs, and the nonparametric Wilcoxon test. These methods have limitations in the analysis of skewed cost data. The t-test on untransformed costs is based on the assumption that cost is approximately normally distributed; this matters especially in small to moderate samples. The Wilcoxon test is appropriate for testing the equality of two sample means only when the shape (and thus the variance) of the cost distribution is the same in both groups. The t-test on log-transformed costs assumes that cost has a log-normal distribution; however, testing the equality of means in the log-scale is equivalent to testing the equality of means in the original scale only if the variances in the log-scale are equal. The impact of violating these assumptions when applying these common methods to cost data has not been assessed.

To overcome these limitations, Zhou and coworkers [8] proposed a Z-score method for comparing the means of log-normally distributed costs between two groups. The purpose of this paper is to examine the effect of this approach on the results and conclusions of previously published studies that compare costs. We identified articles recently published in the medical literature and describe how they analyzed their skewed cost data. Wherever enough information was given, we reanalyzed the data using the Z-score method and observed whether the conclusions were altered. Finally, we make some recommendations about the use of the various tests.

Methods

Summary of the Z-Score Method

When comparing costs between two groups, health services and clinical researchers are often interested in whether mean costs are the same in the two groups. If we denote the mean costs of the two groups by M₁ and M₂, the null hypothesis of interest is H₀: M₁ − M₂ = 0.
Assume that the log-transformed costs in group 1 and group 2 are normally distributed with means μ₁ and μ₂ and variances σ₁² and σ₂², respectively. Because log M₁ = μ₁ + σ₁²/2 and log M₂ = μ₂ + σ₂²/2 (Appendix 1), H₀ is equivalent to H₀: (μ₂ + σ₂²/2) − (μ₁ + σ₁²/2) = 0. The new test [8] is a Z-score based on an estimate of (μ₂ + σ₂²/2) − (μ₁ + σ₁²/2) divided by its SE; computation of the Z-score is described in Appendix 2. The Z-score method is appropriate for testing H₀ if the cost data are log-normally distributed. It does not ignore the skewness of the cost data (as the t-test on untransformed data does), nor does it require equal variances in the original scale (as the Wilcoxon test does) or in the log-scale (as the t-test in the log-scale does). Appendix 1 gives more detail about why the t-test on log-transformed costs tests a different null hypothesis, H₀*: μ₁ − μ₂ = 0, which differs from H₀ unless σ₁ = σ₂. In a simulation study assuming that the cost data are log-normal, Zhou and colleagues [8] showed that when the variances in the log-scale are unequal, the Wilcoxon test and the t-tests on both the log-scale and the original scale are all subject to incorrect type I error rates, whereas the Z-score method has accurate type I error rates and adequate power. When the variances in the log-scale are equal (that is, when H₀ and H₀* are equivalent), the performance of the Z-score method is similar to that of the Wilcoxon test and the t-tests on both scales. Thus, the Z-score method is always appropriate for testing the equality of mean costs of two log-normal samples, regardless of whether the variances in the log-scale are equal.

Identification of Articles

To study the statistical methods used in the literature, we did a MEDLINE search to identify articles published between January 1991 and January 1996 with the MeSH heading "costs and cost analysis" (n = 2,333). Narrowing the focus to hospital costs (n = 414), we then identified articles that were either in the statistical and numerical subgroup or for which hospital costs were the major focus (n = 146). After we eliminated articles not written in English, commentaries, letters, and editorials, 118 articles remained and were reviewed. For each article, we recorded sample sizes, descriptive statistics, and the types of analyses conducted. We also recorded whether the article was only descriptive or inferential and whether cost data were transformed (logarithmically or in another way); if they were, we recorded whether justification for the transformation was provided. For articles in which statistical analysis was done and sufficient detail was provided, we reanalyzed the data by using the method of Zhou and colleagues [8].

Results

As Figure 1 shows, 69 articles (58.5%) were descriptive only and did not include statistical inferences. Most of the other 49 articles described more than one statistical test, regression analysis, or both.

Figure 1. Summary of statistical methods used in published articles.

Logarithmic transformation of cost data was more likely to be done when regression analysis was used. Of the regression analyses done on costs in 21 articles, 11 used the natural logarithm of costs as the dependent variable, 1 used the Cox semiparametric model to account for censored data, and the other 9 used cost itself as the dependent variable.
The most frequent justification for transforming data was the skewness of the cost data. When t-test results, Wilcoxon test results, or CIs were presented, the analyses had most frequently been done on untransformed data. Of the 36 articles that described two-sample tests, only 4 transformed the data to the natural logarithm of cost: 3 for two-sample t-tests and 1 for a paired t-test. Untransformed data were used in the other analyses, which included Wilcoxon tests to compare two groups (12 cases), two-sample t-tests (22 cases), and a paired t-test (1 case). All four one-sample (t-distribution) CIs were calculated on untransformed cost data.

Eleven articles [9-19] included two-sample tests and contained enough information to allow us to reanalyze the data using the Z-score method. These 11 articles described a total of 23 Wilcoxon tests and 24 t-tests. Table 1 shows the P values reported in the articles and the P values derived by using the new Z-score method. In some cases, the changes are dramatic and could affect interpretation of the findings. For example, one test (test 16.9) had a reported P value of 0.16, but the Z-score method produced a P value of 0.001.

Table 1. Results of the Reanalysis of 47 Tests.

If a P value less than 0.05 indicates a statistically significant result, six results (three Wilcoxon test results and three t-test results) changed enough on reanalysis to alter some conclusions. Specifically, one Wilcoxon test (test 17.1) showed statistically significant results in the article but nonsignificant results on reanalysis, and two Wilcoxon tests (tests 16.6 and 16.9) showed nonsignificant results in the article but statistical significance on reanalysis. For the articles that used t-tests on untransformed cost data, two statistically significant results (tests 18.1 and 19.1) became nonsignificant on reanalysis, and one nonsignificant result (test 12.2) became statistically significant.

Discussion

We used the mean to measure the central tendency of skewed cost data. Other measures, such as the median, may be more appropriate for descriptive purposes; however, the bottom line for policy-makers is total cost, which can be derived only from the mean. In our review of the literature, we found that for two-sample comparisons, most investigators used t-tests on untransformed costs or nonparametric Wilcoxon tests; some also used t-tests on log-transformed costs. All three methods have limitations. The validity of the t-test on untransformed costs requires normality of the cost data, except in large samples. The Wilcoxon test requires equal variances in the original scale, and it is very sensitive to skewness in the data [20]. Although the t-test on log-transformed costs adjusts for skewness, it tests the wrong null hypothesis unless the variances in the log-scale are equal. The Z-score method [8] was specifically developed to test the equality of means of two log-normal samples and explicitly adjusts for skewness in cost data. Despite the limitations of the t-test on untransformed costs and the Wilcoxon test, reanalysis with the Z-score method altered the conclusions of only six of the 47 tests.
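The Z-score test described above has a direct implementation once the identity log M = μ + σ²/2 is in hand. The article defers the SE computation to its Appendix 2, which is not reproduced here, so the sketch below assumes the usual delta-method standard error for ȳ + s²/2; the published test may differ in small-sample details.

```python
import numpy as np
from scipy import stats

def zscore_lognormal_means(x1, x2):
    """Z-score test of equal means for two log-normal samples, built from
    the identity log M = mu + sigma^2/2. The standard error is the usual
    delta-method form for (ybar + s^2/2) in each group (an assumption; the
    published Appendix 2 formula may differ in detail)."""
    y1, y2 = np.log(x1), np.log(x2)
    n1, n2 = y1.size, y2.size
    m1, v1 = y1.mean(), y1.var(ddof=1)
    m2, v2 = y2.mean(), y2.var(ddof=1)
    z = ((m2 + v2 / 2) - (m1 + v1 / 2)) / np.sqrt(
        v1 / n1 + v1**2 / (2 * (n1 - 1)) + v2 / n2 + v2**2 / (2 * (n2 - 1))
    )
    return z, 2 * stats.norm.sf(abs(z))   # Z statistic and two-sided p-value

# Example: equal log-scale means, unequal log-scale variances, so the
# original-scale means differ and the test should tend to reject.
rng = np.random.default_rng(2)
costs_a = rng.lognormal(mean=7.0, sigma=1.5, size=80)
costs_b = rng.lognormal(mean=7.0, sigma=0.6, size=80)
print(zscore_lognormal_means(costs_a, costs_b))
```

Note how the statistic compares ȳ + s²/2 between groups rather than ȳ alone, which is exactly why it tests H₀ rather than the log-scale hypothesis H₀*.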

Collaboration


Dive into Xiao Hua Zhou's collaboration.

Top Co-Authors

Donna K. McClish, Virginia Commonwealth University

William M. Tierney, University of Oklahoma Health Sciences Center

Kang Li, Harbin Medical University

Yan Hou, Harbin Medical University

Baojiang Chen, Third Military Medical University

Gengsheng Qin, Georgia State University