Publications


Featured research published by Debbie L. Hahs-Vaughn.


Journal of Experimental Education | 2005

A Primer for Using and Understanding Weights With National Datasets

Debbie L. Hahs-Vaughn

Using data from the National Study of Postsecondary Faculty and the Early Childhood Longitudinal Study—Kindergarten Class of 1998-99, the author provides guidelines for incorporating weights and design effects in single-level analyses using Windows-based SPSS and AM software. Examples of analyses that do and do not employ weights and design effects are also provided to show how key parameter estimates and standard errors differ across the weighting and design-effect continuum. The author recommends the most appropriate weighting options, specifically a strategy that accommodates both oversampled groups and cluster sampling (i.e., using weights and design effects), which leads to the most accurate parameter estimates and a reduced risk of committing a Type I error. However, using a design-effect-adjusted weight in SPSS may produce underestimated standard errors when compared with the more accurate estimates produced by specialized software such as AM.
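A minimal sketch of the core adjustment this article describes: normalizing a raw sampling weight and then dividing it by a design effect (DEFF) before a weighted analysis. The variable names, the DEFF value, and the use of statsmodels WLS are illustrative assumptions, not the article's own code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file: one row per sampled respondent.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "raw_weight": rng.uniform(50, 400, size=n),  # raw sampling weight
    "x": rng.normal(size=n),                     # predictor
})
df["y"] = 0.3 * df["x"] + rng.normal(size=n)     # outcome

# 1. Relative (normalized) weight: rescales the raw weight so it sums to the
#    sample size instead of the population size.
df["rel_weight"] = df["raw_weight"] / df["raw_weight"].mean()

# 2. Design-effect-adjusted weight: divide by an assumed DEFF to approximate
#    the loss of precision caused by cluster sampling.
DEFF = 1.9  # illustrative value; real DEFFs come from the survey documentation
df["deff_weight"] = df["rel_weight"] / DEFF

# Weighted least squares under each weight; compare coefficients and SEs.
X = sm.add_constant(df["x"])
for w in ("rel_weight", "deff_weight"):
    fit = sm.WLS(df["y"], X, weights=df[w]).fit()
    print(w, round(fit.params["x"], 3), round(fit.bse["x"], 3))
```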


Journal of Experimental Education | 2006

Estimating and Using Propensity Score Analysis With Complex Samples

Debbie L. Hahs-Vaughn; Anthony J. Onwuegbuzie

Propensity score analysis is a statistical technique that can be applied to observational data to mimic randomization and thus estimate causal effects in studies in which the researchers have not applied randomization. In this article the authors (a) describe propensity score methodology and (b) demonstrate its application using elementary student data from the Early Childhood Longitudinal Study-Kindergarten Class of 1998-99 (ECLS-K). The authors also discuss methodological considerations that need to be addressed when using data from complex samples, as in this analysis. Furthermore, the authors provide a tutorial that researchers can use to understand the methodology behind propensity score analysis and to emulate its steps.
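A minimal sketch of the two basic steps the abstract describes: estimating propensity scores with logistic regression and then using them, here as inverse-probability-of-treatment weights. The covariates, treatment indicator, and the choice of weighting rather than matching are illustrative assumptions, not the authors' procedure with the ECLS-K data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical observational data: treatment is not randomly assigned.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "ses": rng.normal(size=n),        # socioeconomic status covariate
    "pre_score": rng.normal(size=n),  # pretest covariate
})
df["treated"] = (rng.random(n) < 1 / (1 + np.exp(-df["ses"]))).astype(int)
df["outcome"] = 0.3 * df["treated"] + 0.5 * df["pre_score"] + rng.normal(size=n)

# Step 1: estimate the propensity score, P(treated | covariates).
covs = df[["ses", "pre_score"]]
ps = LogisticRegression().fit(covs, df["treated"]).predict_proba(covs)[:, 1]

# Step 2: inverse-probability-of-treatment weights (one common use of the score).
df["iptw"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted difference in means as a rough treatment-effect estimate.
t, c = df[df["treated"] == 1], df[df["treated"] == 0]
ate = (np.average(t["outcome"], weights=t["iptw"])
       - np.average(c["outcome"], weights=c["iptw"]))
print(round(ate, 3))
```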


British Journal of Educational Technology | 2007

Combined longitudinal effects of attitude and subjective norms on student outcomes in a web-enhanced course: A structural equation modelling approach

Stephen A. Sivo; Cheng‐Chang ‘Sam’ Pan; Debbie L. Hahs-Vaughn

Factors affecting the student use of a course management system in a web-enhanced course are investigated using the technology acceptance model. Represented in the present study is the second phase of the analysis, with a focus on the causal relationship of subjective norms to student attitudes towards WebCT and their effect on three dependent variables: (1) end-of-course grades; (2) online frequency; and (3) future preference to take a web-enhanced class over a face-to-face course. Structural equation modelling was used to analyse the measures of the two independent variables across three time periods. Findings suggest that the proposed model fits the data well and that it is useful to model student attitudes and perceived social pressure to understand certain student outcomes. Further recommendations for researchers and practitioners are addressed.
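The article fits a full structural equation model; as a simplified stand-in, the sketch below traces the same kind of path logic with ordinary regressions, predicting attitude from subjective norms at each wave and then an end-of-course outcome from attitude. All variable names and the three-wave layout are assumptions made for illustration, not the study's measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format data: subjective norms (sn) and attitude (att)
# measured at three time points, plus an end-of-course outcome.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({f"sn_t{t}": rng.normal(size=n) for t in (1, 2, 3)})
for t in (1, 2, 3):
    df[f"att_t{t}"] = 0.5 * df[f"sn_t{t}"] + rng.normal(scale=0.8, size=n)
df["grade"] = 0.4 * df["att_t3"] + rng.normal(size=n)

# Path-style regressions standing in for the structural part of the SEM:
# subjective norms -> attitude at each wave, then attitude -> outcome.
for t in (1, 2, 3):
    fit = smf.ols(f"att_t{t} ~ sn_t{t}", data=df).fit()
    print(f"wave {t}: norms -> attitude", round(fit.params[f"sn_t{t}"], 3))

outcome_fit = smf.ols("grade ~ att_t3", data=df).fit()
print("attitude (t3) -> grade", round(outcome_fit.params["att_t3"], 3))
```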


International Journal of Research & Method in Education | 2006

Analysis of data from complex samples

Debbie L. Hahs-Vaughn

Oversampling and cluster sampling must be addressed when analyzing complex sample data. This study: (a) compares parameter estimates when applying weights versus not applying weights; (b) examines subset selection issues; (c) compares results when using standard statistical software (SPSS) versus specialized software (AM); and (d) offers recommendations for analyzing complex sample data. Underestimated standard errors and overestimated test statistics were produced when both the oversampled and cluster sample characteristics of the data were ignored. Regarding subset analysis, marked differences were not evident in SPSS results, but the standard errors of the weighted versus unweighted models became more similar as smaller subsets of the data were extracted using AM. Recommendations to researchers are provided including accommodating both oversampling and cluster sampling.
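One way to see the standard-error issue this abstract describes is to compare naive OLS standard errors with cluster-robust ones that acknowledge the primary sampling units. The statsmodels call below is a general-purpose illustration under assumed variable names, not the SPSS/AM procedure used in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical clustered sample: students nested within sampled schools (PSUs).
rng = np.random.default_rng(2)
schools = np.repeat(np.arange(40), 25)                 # 40 PSUs, 25 students each
school_effect = rng.normal(scale=0.7, size=40)[schools]
df = pd.DataFrame({
    "psu": schools,
    "x": rng.normal(size=schools.size),
})
df["y"] = 0.3 * df["x"] + school_effect + rng.normal(size=schools.size)

X = sm.add_constant(df["x"])

# Naive fit: treats observations as independent (ignores cluster sampling).
naive = sm.OLS(df["y"], X).fit()

# Cluster-robust fit: standard errors account for within-PSU dependence.
robust = sm.OLS(df["y"], X).fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})

print("naive SE:  ", round(naive.bse["x"], 4))
print("cluster SE:", round(robust.bse["x"], 4))   # typically larger
```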


International Journal of Language & Communication Disorders | 2016

Joint attention interventions for children with autism spectrum disorder: a systematic review and meta-analysis.

Kimberly A. Murza; Jamie Schwartz; Debbie L. Hahs-Vaughn; Chad Nye

BACKGROUND: A core social-communication deficit in children with autism spectrum disorder (ASD) is limited joint attention behaviours, which are important in the diagnosis of ASD and shown to be a powerful predictor of later language ability. Various interventions have been used to train joint attention skills in children with ASD. However, it is unclear which participant, intervention and interventionist factors yield more positive results.

AIMS: The purpose of this systematic review and meta-analysis was to provide a quantitative assessment of the effectiveness of joint attention interventions aimed at improving joint attention abilities in children with ASD.

METHODS & PROCEDURES: The researchers searched six databases for studies meeting the inclusion criteria at two levels: title/abstract and full-text stages. Two independent coders completed data extraction using a coding manual and form developed specifically for this research study. Meta-analysis procedures were used to determine the overall effects of several comparisons, including treatment type, treatment administrator, intervention characteristics and follow-up.

MAIN CONTRIBUTION: Fifteen randomized experimental studies met the inclusion criteria. All comparisons resulted in statistically significant effects, though overlapping confidence intervals suggest that none of the comparisons were statistically different from each other. Specifically, the treatment administrator, dosage and design (control or comparison, etc.) characteristics of the studies do not appear to produce significantly different effects.

CONCLUSIONS & IMPLICATIONS: The results of this meta-analysis provide strong support for explicit joint attention interventions for young children with ASD; however, it remains unclear which children with ASD respond to which type of intervention.
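A minimal sketch of the basic pooling step behind a meta-analysis like this one: inverse-variance weighting of study-level effect sizes with a confidence interval for the pooled estimate. The numbers are made up for illustration; they are not the effect sizes from this review.

```python
import numpy as np

# Hypothetical study-level standardized mean differences and their variances.
effects = np.array([0.55, 0.80, 0.30, 0.95, 0.60])
variances = np.array([0.04, 0.09, 0.05, 0.12, 0.06])

# Fixed-effect inverse-variance pooling.
w = 1 / variances
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```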


Journal of Experimental Education | 2006

Utilization of Sample Weights in Single Level Structural Equation Modeling

Debbie L. Hahs-Vaughn; Richard G. Lomax

Complex survey designs often employ multistage cluster sampling designs and oversample particular units to ensure more accurate population parameter estimates. These issues must be accommodated in the analysis to ensure accurate parameter estimation. Incorporation of sample weights in some statistical procedures has been studied. However, research on the effect of sample weights on estimates, standard errors, and fit measures in latent variable models is negligible, and studies examining methodology on latent variable modeling applications using extant data are rare. Using the Beginning Postsecondary Students Longitudinal Study 1990/92/94, the authors found, with mixed results, that a statistically significant difference exists in estimates and fit indices when weights and designs are applied versus when they are ignored.
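In single-level SEM, sampling weights typically enter through the covariance matrix that the model is fitted to. The sketch below computes a weighted covariance matrix with NumPy under assumed variable names; it shows only the quantity a design-based latent variable model would analyze, not the authors' actual modeling software or data.

```python
import numpy as np
import pandas as pd

# Hypothetical indicators for a small measurement model, plus a sampling weight.
rng = np.random.default_rng(3)
n = 800
latent = rng.normal(size=n)
df = pd.DataFrame({f"item{i}": 0.7 * latent + rng.normal(scale=0.7, size=n)
                   for i in (1, 2, 3)})
weight = rng.uniform(0.5, 3.0, size=n)          # stands in for a relative weight

items = df[["item1", "item2", "item3"]].to_numpy()

# Unweighted vs weighted covariance matrices; the weighted one is what a
# design-based single-level SEM would fit.
unweighted_cov = np.cov(items, rowvar=False)
weighted_cov = np.cov(items, rowvar=False, aweights=weight)

print(np.round(unweighted_cov, 3))
print(np.round(weighted_cov, 3))
```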


Teaching in Higher Education | 2007

Changes in Student-Centred Assessment by Postsecondary Science and Non-Science Faculty.

Karen L. Yanowitz; Debbie L. Hahs-Vaughn

Although many appeals for reform include adopting more student-centred assessment, few studies have examined the postsecondary classroom. Using the 1993 and 1999 National Study of Postsecondary Faculty, the results of the current study revealed that faculty in the sciences were less likely to use student-centred assessment practices than faculty in non-sciences. Additionally, while faculty in the non-sciences showed a significant increase in their use of student-centred assessment between the two waves of data collection, no such increase was obtained for faculty in the sciences. Results are discussed in terms of public policy.


Evaluation Review | 2011

Methodological considerations in using complex survey data: an applied example with the Head Start Family and Child Experiences Survey.

Debbie L. Hahs-Vaughn; Christine M. McWayne; Rebecca J. Bulotsky-Shearer; Xiaoli Wen; Ann Marie Faria

Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on addressing or ignoring the complex sampling issues. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results.
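The nonindependence issue the abstract raises (children nested within classrooms or programs) is often handled with a multilevel model. Below is a minimal random-intercept sketch with statsmodels MixedLM under assumed variable names; note that MixedLM alone does not apply the FACES sampling weights, so this illustrates only the nesting side of the problem, not the article's full approach.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical nested data: children within classrooms.
rng = np.random.default_rng(4)
classrooms = np.repeat(np.arange(60), 15)
class_effect = rng.normal(scale=0.6, size=60)[classrooms]
df = pd.DataFrame({
    "classroom": classrooms,
    "home_involvement": rng.normal(size=classrooms.size),
})
df["child_outcome"] = (0.25 * df["home_involvement"] + class_effect
                       + rng.normal(size=classrooms.size))

# Random-intercept model: acknowledges that children in the same classroom
# are more alike than children drawn independently.
model = smf.mixedlm("child_outcome ~ home_involvement", data=df,
                    groups=df["classroom"])
result = model.fit()
print(result.summary())
```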


Evidence-based Communication Assessment and Intervention | 2008

Understanding high quality research designs for speech language pathology

Debbie L. Hahs-Vaughn; Chad Nye

As innovative methods, strategies, or curricula are introduced to assist clients with speech and language disorders, many speech-language pathologists (SLPs) may question the effectiveness of an intervention and, more specifically, whether the results they are seeing are the result of the intervention (i.e., cause and effect). Several research designs allow researchers to examine causality, including the most widely known, the randomized controlled trial (RCT). While not all situations are suited to applying the RCT, other high-quality designs may be used that still lend evidence of causality even when randomization is not possible. The purpose of this paper is to provide a brief summary and illustrations of randomized controlled trials (RCTs) and quasi-experimental designs (QEDs) that are appropriate for the study of treatment effectiveness in speech-language pathology research, present potential barriers to quality randomization, and provide guidelines to help identify RCTs.
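The defining ingredient of the RCT the paper describes is random assignment. The tiny sketch below assigns a hypothetical participant list to treatment and control conditions, purely as an illustration of the randomization step rather than any design from the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical participant list awaiting assignment.
participants = pd.DataFrame({"id": range(1, 41)})

# Simple random assignment: shuffle the ids, then split evenly into two arms.
shuffled = participants.sample(frac=1, random_state=5).reset_index(drop=True)
shuffled["condition"] = np.where(shuffled.index < len(shuffled) // 2,
                                 "treatment", "control")

print(shuffled["condition"].value_counts())
```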


International Journal of Speech-Language Pathology | 2011

Assessing methodological quality of randomized and quasi-experimental trials: A summary of stuttering treatment research

Chad Nye; Debbie L. Hahs-Vaughn

The purpose of this study is to provide a detailed analysis of the methodological quality of experimental and quasi-experimental group-design studies in the area of stuttering intervention. A total of 23 randomized controlled trials (RCTs) and quasi-experimental studies of treatment in the area of stuttering were identified and retrieved from an electronic search of nine databases and 13 individual journals. Using the Downs and Black Checklist, each study was coded for reporting, external validity, internal validity, and internal validity confounding. Results of the coding indicated that while overall reporting was reasonably complete, the quality of the external and internal validity scores was substantively incomplete. This lack of clarity and completeness in the reporting of issues related to external and internal validity makes the interpretation of individual study findings problematic and seriously affects the replicability of the individual studies. Implications of these findings are suggested for both researchers and clinicians.
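A small sketch of the kind of aggregation such coding implies: summing per-item checklist scores into reporting, external validity, internal validity, and confounding subscales for each study. The item groupings and scores here are placeholders, not the actual Downs and Black items or the studies in this review.

```python
import pandas as pd

# Placeholder coding sheet: one row per study, one column per checklist item
# (1 = criterion met, 0 = not met / unclear).
coded = pd.DataFrame(
    {"r1": [1, 1, 0], "r2": [1, 0, 1],          # reporting items
     "ev1": [0, 1, 0], "ev2": [0, 0, 1],        # external validity items
     "iv1": [1, 1, 0], "iv2": [1, 0, 0],        # internal validity (bias) items
     "c1": [0, 1, 0]},                          # confounding item
    index=["Study A", "Study B", "Study C"])

subscales = {
    "reporting": ["r1", "r2"],
    "external_validity": ["ev1", "ev2"],
    "internal_validity": ["iv1", "iv2"],
    "confounding": ["c1"],
}

# Sum item scores within each subscale for every study.
totals = pd.DataFrame({name: coded[items].sum(axis=1)
                       for name, items in subscales.items()})
print(totals)
```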

Collaboration


Top co-authors of Debbie L. Hahs-Vaughn:

Chad Nye, University of Central Florida
Jamie Schwartz, University of Central Florida
Xiaoli Wen, National Louis University
Ann Marie Faria, American Institutes for Research
Kimberly A. Murza, University of Northern Colorado