Publication


Featured research published by Austin H. Johnson.


Journal of School Psychology | 2011

A Systematic Evaluation of Token Economies as a Classroom Management Tool for Students with Challenging Behavior.

Daniel M. Maggin; Sandra M. Chafouleas; Katelyn M. Goddard; Austin H. Johnson

A two-part systematic review was undertaken to assess the effectiveness of token economies in increasing rates of appropriate classroom behavior for students demonstrating behavioral difficulties. The first part of the review utilized the recently published What Works Clearinghouse (WWC) standards for evaluating single-subject research to determine the extent to which eligible studies demonstrated sufficient evidence to classify the token economy as an evidence-based practice. The second part of the review employed meta-analytic techniques across four different types of effect sizes to evaluate the quantitative strength of the findings. Methodological strengths and weaknesses across the studies were systematically investigated. Results indicated that the extant research on token economies does not provide sufficient evidence to be deemed best-practice based on the WWC criteria.


Exceptionality | 2011

A Quantitative Synthesis of Methodology in the Meta-Analysis of Single-Subject Research for Students with Disabilities: 1985-2009.

Daniel M. Maggin; Breda V. O'Keeffe; Austin H. Johnson

The purpose of this review was to examine the methods used to conduct meta-analyses of single-subject research involving students with and at-risk for disabilities. Specifically, the procedures used for preparing, aggregating, analyzing, and evaluating single-subject data across 68 primary syntheses were examined. In addition to these methodological and reporting issues, the present review also considered various characteristics of syntheses to determine their overall prevalence and focus. Results of the review indicated that the publication rate of single-subject meta-analyses has increased considerably in recent years, focusing equally on students with high- and low-incidence disabilities. This review revealed considerable variability in the methods and procedures used to synthesize single-subject research. Based on these findings, suggestions for future single-subject meta-analyses were made.


Journal of School Psychology | 2012

A systematic evidence review of school-based group contingency interventions for students with challenging behavior

Daniel M. Maggin; Austin H. Johnson; Sandra M. Chafouleas; Laura M. Ruberto; Melissa Berggren

The purpose of this review was to synthesize the research underlying group contingency interventions to determine whether there is sufficient evidence to support their use for managing the classroom behavior of students with behavioral difficulties. An application of the What Works Clearinghouse (WWC) procedures for evaluating single-subject research revealed that the research investigating group contingencies demonstrated sufficient rigor, evidence, and replication to label the intervention as evidence-based. These findings were further supported across five quantitative indices of treatment effect. The results associated with the application of the WWC procedures and quantitative evaluations were supplemented with additional systematic coding of methodological features and study characteristics to evaluate the populations and conditions under which the effects of the group contingency best generalize. Findings associated with this coding revealed that the lack of detailed reporting across studies limited our ability to determine for whom and under what conditions group contingencies are best suited.


Exceptional Children | 2015

Is Performance Feedback for Educators an Evidence-Based Practice? A Systematic Review and Evaluation Based on Single-Case Research

Lindsay M. Fallon; Melissa A. Collier-Meek; Daniel M. Maggin; Lisa M. Hagermoser Sanetti; Austin H. Johnson

Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators’ treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current research on performance feedback as a strategy to promote the implementation of school-based practices. Studies were evaluated according to What Works Clearinghouse (WWC; Kratochwill et al., 2010) technical guidelines for single-case design, utilizing both the design and evidence standards to determine whether studies provided sufficient evidence for the effectiveness of performance feedback. Results indicate that performance feedback can be termed an evidence-based intervention based on criteria set by the WWC. Implications for future research are described.


Education and Treatment of Children | 2014

A Meta-Analytic Evaluation of the FRIENDS Program for Preventing Anxiety in Student Populations

Daniel M. Maggin; Austin H. Johnson

The purpose of this review was to evaluate the methodological strength and overall effectiveness of the research underlying the FRIENDS program for preventing anxiety in students at low and elevated risk for developing anxiety disorders. Meta-analytic findings provided mixed results, with low-risk students exposed to the program having demonstrated small improvements over comparisons for immediate posttest measures of anxiety. Findings drawn from follow-up data collection periods indicated that low-risk students sustained initial gains on anxiety over 12 months but not beyond. In addition, no immediate posttest difference was found between students at elevated risk on measures of anxiety. These findings are discussed in terms of practical and methodological limitations of the body of research.


Remedial and Special Education | 2017

Functional Assessment–Based Interventions for Students With or At-Risk for High-Incidence Disabilities: Field Testing Single-Case Synthesis Methods

Eric Alan Common; Kathleen Lynne Lane; James E. Pustejovsky; Austin H. Johnson; Liane Elizabeth Johl

This systematic review investigated one systematic approach to designing, implementing, and evaluating functional assessment–based interventions (FABI) for use in supporting school-age students with or at-risk for high-incidence disabilities. We field tested several recently developed methods for single-case design syntheses. First, we appraised the quality of individual studies and the overall body of work using Council for Exceptional Children's standards. Next, we calculated and meta-analyzed within-case and between-case effect sizes. Results indicated that studies were of high methodological quality, with nine studies identified as being methodologically sound and demonstrating positive outcomes across 14 participants. However, insufficient evidence was available to classify the evidence base for FABIs due to the small number of participants within (fewer than the recommended three) and across (fewer than the recommended 20) studies. Nonetheless, average within-case effect sizes were equivalent to increases of 118% between baseline and intervention phases. Finally, potential moderating variables were examined. Limitations and future directions are discussed.
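The within-case effect size reported above can be illustrated as a percentage change between phase means. The sketch below is a minimal, hypothetical demonstration of that arithmetic; the session data are invented, not drawn from the reviewed studies.

```python
# Hypothetical illustration: a simple within-case effect size expressed as the
# percentage change from the baseline phase mean to the intervention phase mean,
# one of the ways single-case outcomes are summarized. Data are invented.

def percent_change(baseline, intervention):
    """Percentage change from the baseline mean to the intervention mean."""
    base_mean = sum(baseline) / len(baseline)
    int_mean = sum(intervention) / len(intervention)
    return 100.0 * (int_mean - base_mean) / base_mean

# Invented session-by-session rates of appropriate behavior for one case.
baseline_phase = [20, 22, 18, 20]        # mean = 20.0
intervention_phase = [42, 44, 46, 42.4]  # mean = 43.6
print(round(percent_change(baseline_phase, intervention_phase), 1))  # 118.0
```

With these invented values the intervention mean is 118% above the baseline mean, matching the magnitude of the average effect described in the abstract.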


Remedial and Special Education | 2017

A Meta-Analysis of School-Based Group Contingency Interventions for Students with Challenging Behavior: An Update.

Daniel M. Maggin; James E. Pustejovsky; Austin H. Johnson

Group contingencies are recognized as a potent intervention for addressing challenging student behavior in the classroom, with research reviews supporting the use of this intervention platform going back more than four decades. Over this time period, the field of education has increasingly emphasized the role of research evidence for informing practice, as reflected in the increased use of systematic reviews and meta-analyses. In the current article, we continue this trend by applying recently developed between-case effect size measures and transparent visual analysis procedures to synthesize an up-to-date set of group contingency studies that used single-case designs. Results corroborated recent systematic reviews by indicating that group contingencies are generally effective—particularly for addressing challenging behavior in general education classrooms. However, our review highlights the need for more research on students with disabilities and the need to collect and report information about participants’ functional level.


School Psychology Quarterly | 2017

Dependability of Data Derived From Time Sampling Methods With Multiple Observation Targets.

Austin H. Johnson; Sandra M. Chafouleas; Amy M. Briesch

In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in systematic direct observation and 58.20 hr of previous direct observation experience, scored 6 videos of student behavior using 12 different time-sampling protocols. Five videos were submitted for analysis, and results for observations using momentary time-sampling and whole-interval recording suggested that the majority of variance was attributable to the rating occasion, although results for partial-interval recording generally demonstrated large residual components comparable with those seen in prior research. Dependability coefficients were above .80 when averaging across 1 to 2 raters using momentary time-sampling, and 2 to 3 raters using whole-interval recording. Ratings derived from partial-interval recording needed to be averaged over 3 to 7 raters to demonstrate dependability coefficients above .80.
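The finding that dependability exceeds .80 once ratings are averaged over enough raters follows from how generalizability theory treats error variance. The sketch below is a hypothetical illustration of a phi (dependability) coefficient as a function of the number of raters; the variance components are invented for demonstration and are not taken from the study.

```python
# Hypothetical sketch: in generalizability theory, averaging ratings over
# n raters divides the absolute error variance by n, so the dependability
# (phi) coefficient rises with more raters. Variance components are invented.

def dependability(var_object, var_error, n_raters):
    """Phi coefficient for the mean of n_raters ratings (absolute decisions)."""
    return var_object / (var_object + var_error / n_raters)

var_object = 0.60  # variance attributable to the object of measurement
var_error = 0.40   # absolute error variance for a single rater

for n in (1, 2, 3):
    print(n, round(dependability(var_object, var_error, n), 3))
```

With these invented components, a single rater yields phi = .60, two raters .75, and three raters about .82, mirroring the pattern in which averaging across a few raters pushes dependability above the .80 benchmark.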


Preventing School Failure | 2015

The Reporting of Core Program Components: An Overlooked Barrier for Moving Research Into Practice

Daniel M. Maggin; Austin H. Johnson

The successful implementation of school-based behavioral interventions requires school personnel to be competent with program content and procedures. An unfortunate trend within school-based behavioral intervention research is that the core intervention components and implementation features are often not fully described. Without clear descriptions of these critical elements, it is difficult for school personnel to successfully identify, adopt, and implement research-based programs. The practical implications that study reporting has on the integration of research into practice are illustrated in the present study through a systematic review of a widely researched anxiety prevention program. Results indicated that study authors often did not provide sufficient detail on the intervention components used and whether those components were implemented with sufficient levels of treatment fidelity. Moreover, the supports used to facilitate the implementation of the intervention varied widely across studies. These findings are discussed in relation to evidence suggesting that descriptions of the independent variable are important for identifying the mechanisms of adult and student behavior change, while reports of implementation features are needed to ensure school personnel are able to consider the feasibility of implementing research-based programs within applied settings. The authors conclude by describing implications for both research and practice.


Assessment for Effective Intervention | 2015

Using Consensus Building Procedures With Expert Raters to Establish Comparison Scores of Behavior for Direct Behavior Rating

Rose Jaffery; Austin H. Johnson; Mark C. Bowler; T. Chris Riley-Tillman; Sandra M. Chafouleas; Sayward E. Harrison

To date, rater accuracy when using Direct Behavior Rating (DBR) has been evaluated by comparing DBR-derived data to scores yielded through systematic direct observation. The purpose of this study was to evaluate an alternative method for establishing comparison scores using expert-completed DBR alongside best practices in consensus building exercises, to evaluate the accuracy of ratings. Standard procedures for obtaining expert data were established and implemented across two sites. Agreement indices and comparison scores were derived. Findings indicate that the expert consensus building sessions resulted in high agreement between expert raters, lending support for this alternative method for identifying comparison scores for behavioral data.

Collaboration


Austin H. Johnson's top co-authors:

- Daniel M. Maggin (University of Illinois at Chicago)
- Melissa A. Collier-Meek (University of Massachusetts Boston)
- James E. Pustejovsky (University of Texas at Austin)