
Publications


Featured research published by Faith G. Miller.


School Psychology Quarterly | 2015

A comparison of measures to screen for social, emotional, and behavioral risk.

Faith G. Miller; Daniel Cohen; Sandra M. Chafouleas; T. Chris Riley-Tillman; Megan E. Welsh; Gregory A. Fabiano

The purpose of this study was to examine the relations among teacher-implemented screening measures used to identify social, emotional, and behavioral risk. To this end, 5 screening options were evaluated: (a) Direct Behavior Rating - Single Item Scales (DBR-SIS), (b) Social Skills Improvement System - Performance Screening Guide (SSiS), (c) Behavioral and Emotional Screening System - Teacher Form (BESS), (d) office discipline referrals (ODRs), and (e) school nomination methods. The sample included 1,974 students who were assessed tri-annually by their teachers (52% female, 93% non-Hispanic, 81% White). Findings indicated that teacher ratings using standardized rating measures (DBR-SIS, BESS, and SSiS) identified a larger proportion of students as at-risk than ODRs or school nomination methods. Further, risk identification varied by screening option, such that a large percentage of students were inconsistently identified depending on the measure used. Results further indicated weak to strong correlations between screening options. The relation between broad behavioral indicators and mental health screening was also explored by examining classification accuracy indices. Teacher ratings using DBR-SIS and SSiS correctly identified between 81% and 91% of the sample as at-risk using the BESS as a criterion. As less conservative measures of risk, DBR-SIS and SSiS identified more students as at-risk relative to other options. Results highlight the importance of considering the aims of the assessment when selecting broad screening measures to identify students in need of additional support.


Behavioral Disorders | 2014

Teacher Perceptions of the Usability of School-Based Behavior Assessments

Faith G. Miller; Sandra M. Chafouleas; T. Chris Riley-Tillman; Gregory A. Fabiano

Teacher perceptions of school-based behavior assessments were assessed over the course of a school year. Specifically, the utility and relevance of Direct Behavior Ratings–Single Item Scales, a hybrid direct observation method, relative to two school-based behavioral rating scales, the Social Skills Improvement System–Performance Screening Guide and the Behavioral and Emotional Screening System–Teacher Form, were examined. Participants included 65 teachers who completed the Usage Rating Profile-Assessment on each measure after three assessment periods (fall, winter, and spring). Results indicated that although overall usability ratings did not differ, factor scores differed as a function of both measure and assessment period. Implications for practice and directions for future research are discussed.


Assessment for Effective Intervention | 2017

Progress Monitoring the Effects of Daily Report Cards across Elementary and Secondary Settings Using Direct Behavior Rating: Single Item Scales

Faith G. Miller; Nicholas J. Crovello; Sandra M. Chafouleas

Direct Behavior Rating–Single Item Scales (DBR-SIS) have been advanced as a promising, systematic, behavioral, progress-monitoring method that is flexible, efficient, and defensible. This study aimed to extend existing literature on the use of DBR-SIS in elementary and secondary settings, and to examine methods of monitoring student progress in response to intervention. To this end, two concurrent multiple baseline design studies were conducted in a diverse magnet school district located in the northeastern United States. One study was conducted with four students in kindergarten and first grade, whereas the second study was conducted with three students in 10th and 11th grade. Response to a Daily Report Card (DRC) intervention was monitored using two different approaches: DBR-SIS and systematic direct observation (SDO) probes. Across all participants, modest improvements in behavior were observed using both visual and quantitative analyses of DBR-SIS data; however, decisions regarding student response to the intervention varied as a function of the dependent variable analyzed. Implications for practice and future directions for research are discussed.


Journal of School Psychology | 2017

Getting “SMART” about implementing multi-tiered systems of support to promote school mental health

Gerald J. August; Timothy F. Piehler; Faith G. Miller

With the growing adoption and implementation of multi-tiered systems of support (MTSS) in school settings, there is increasing need for rigorous evaluations of adaptive-sequential interventions. That is, MTSS specify universal, selected, and indicated interventions to be delivered at each tier of support, yet few investigations have empirically examined the continuum of supports that are provided to students both within and across tiers. This need is compounded by a variety of prevention approaches that have been developed with distinct theoretical foundations (e.g., Positive Behavioral Interventions and Supports, Social-Emotional Learning) that are available within and across tiers. As evidence-based interventions continue to flourish, school-based practitioners greatly need evaluations regarding optimal treatment sequencing. To this end, we describe adaptive treatment strategies as a natural fit within the MTSS framework. Specifically, sequential multiple assignment randomized trials (SMART) offer a promising empirical approach to rigorously develop and compare adaptive treatment regimens within this framework.


Assessment for Effective Intervention | 2017

Direct Behavior Rating Instrumentation Evaluating the Impact of Scale Formats

Faith G. Miller; T. Chris Riley-Tillman; Sandra M. Chafouleas; Alyssa A. Schardt

The purpose of this study was to investigate the impact of two different Direct Behavior Rating–Single Item Scale (DBR-SIS) formats on rating accuracy. A total of 119 undergraduate students participated in one of two study conditions, each utilizing a different DBR-SIS scale format: one that included percentage of time anchors on the DBR-SIS scale and an explicit reference to duration of the target behavior (percent group) and one that did not include percentage anchors nor a reference to duration of the target behavior (no percent group). Participants viewed nine brief video clips and rated student behavior using one of the two DBR-SIS formats. Rating accuracy was determined by calculating the absolute difference between participant ratings and two criterion measures: systematic direct observation scores and DBR-SIS expert ratings. Statistically significant differences between groups were found on only two occasions, pertaining to ratings of academically engaged behavior. Limitations and directions for future research are discussed.


Journal of Research in Childhood Education | 2016

The Kindergarten Transition: Behavioral Trajectories in the First Formal Year of School

Megan E. Welsh; Faith G. Miller; Janice Kooken; Sandra M. Chafouleas; D. Betsy McCoach

During the first year of school, student success hinges not only on learning new academic skills, but also on meeting behavioral expectations and developing self-regulation skills in the classroom. The purpose of this study was to investigate development in behavioral regulation during the kindergarten year. The authors used multilevel models to explore the associations of sex and risk status with academic engagement and disruptive behavior trajectories, based on daily teacher-generated behavioral ratings obtained for 22 full-day kindergarten students (6 at risk, 12 male) across 80 school days. Results indicated that, in general, behavior improved over time; academic engagement increased whereas disruptive behavior decreased. However, behavioral trajectories varied as a function of sex and risk status. Implications for practice and directions for future research are discussed.


Psychological Assessment | 2017

Test order in teacher-rated behavior assessments: Is counterbalancing necessary?

Janice Kooken; Megan E. Welsh; D. Betsy McCoach; Faith G. Miller; Sandra M. Chafouleas; T. Chris Riley-Tillman; Gregory A. Fabiano

Counterbalancing treatment order in experimental research design is well established as an option to reduce threats to internal validity, but in educational and psychological research, the effect of varying the order of multiple tests given to a single rater has not been examined, and counterbalancing is rarely practiced. The current study examines the effect of test order on teacher-rated measures of student behavior, utilizing data from a behavior measure validation study. Using multilevel modeling to control for students nested within teachers, the effect of rating an earlier measure on the intercept or slope of a later behavior assessment was statistically significant in 22% of predictor main effects for the spring test period. Test order effects had potential for high-stakes consequences, with differences large enough to change risk classification. Results suggest that researchers and practitioners using multiple measures in classroom settings evaluate the potential impact of test order. Where possible, they should counterbalance when the risk of an order effect exists and report justification for any decision not to counterbalance.


Journal of School Psychology | 2018

Methods matter: A multi-trait multi-method analysis of student behavior

Faith G. Miller; Austin H. Johnson; Huihui Yu; Sandra M. Chafouleas; D. Betsy McCoach; T. Chris Riley-Tillman; Gregory A. Fabiano; Megan E. Welsh

Reliable and valid data form the foundation for evidence-based practices, yet surprisingly few studies of school-based behavioral assessments have implemented one of the most fundamental approaches to construct validation: the multitrait-multimethod matrix (MTMM). To this end, the current study examined the reliability and validity of data derived from three commonly utilized school-based behavioral assessment methods (Direct Behavior Rating - Single Item Scales, systematic direct observations, and behavior rating scales) on three common constructs of interest: academically engaged, disruptive, and respectful behavior. Further, this study included data from different sources, including student self-report, teacher report, and external observers. A total of 831 students in grades 3-8 and 129 teachers served as participants. Data were analyzed using bivariate correlations of the MTMM, as well as single- and multi-level structural equation modeling. Results suggested the presence of strong method effects for all the assessment methods utilized, as well as significant relations between constructs of interest. Implications for practice and future research are discussed.


Journal of School Psychology | 2017

Initial development and evaluation of the student intervention matching (SIM) form

Faith G. Miller; Clayton R. Cook; Yanchen Zhang

There is currently a large gap, in both research and practice, between identification practices for students at risk (i.e., universal screening, teacher referral, or extant data as early identification methods) and the selection of appropriate Tier 2 interventions for social, emotional, and behavioral concerns. The purpose of this study was to develop and test the treatment validity of the Student Intervention Matching (SIM) Form, an intervention matching protocol designed for use at Tier 2. To this end, single-case design methodology was employed to systematically evaluate outcomes associated with use of the SIM Form in the intervention selection process. Participants included eight elementary-age students arranged into four student dyads. A multiple baseline design was used to examine the relative effectiveness of interventions matched versus mismatched according to the SIM Form. Results indicated that interventions matched using the SIM Form were functionally related to improved student outcomes across a variety of dependent variables when compared to mismatched phases. Implications for practice and future research are discussed.


Assessment for Effective Intervention | 2017

Direct Behavior Ratings: A Feasible and Effective Progress Monitoring Approach for Social and Behavioral Interventions

Faith G. Miller; Gregory A. Fabiano

Despite significant advancements in supporting students’ social, emotional, and behavioral health in recent years, progress remains hindered by limited formative assessment options. The focus of this special issue is on applications for Direct Behavior Rating (DBR), which is a hybrid assessment method incorporating procedures utilized in direct observations and teacher ratings. Three articles in the special issue utilize multiple baseline designs to demonstrate the utility and effectiveness of DBR single-item scales, and a fourth article describes a multiple-item DBR applied to social behaviors. Expert commentaries provide perspectives on the current literature related to the assessment of social, emotional, and behavioral constructs in school settings and also chart a future course for continued study.

Collaboration


Dive into Faith G. Miller's collaborations.

Top Co-Authors

T. Chris Riley-Tillman, State University of New York System
Megan E. Welsh, University of Connecticut
David L. Lee, Pennsylvania State University
Janice Kooken, University of Connecticut