Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Crystal N. Taylor is active.

Publication


Featured research published by Crystal N. Taylor.


Assessment for Effective Intervention | 2016

Technical Adequacy of the Social, Academic, and Emotional Behavior Risk Screener in an Elementary Sample

Stephen P. Kilgus; Wesley A. Sims; Nathaniel P. von der Embse; Crystal N. Taylor

The purpose of this study was to evaluate the psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS), a brief universal screener for behavioral and emotional risk. Elementary school teachers completed the SAEBRS with 346 students in Grades 3 to 5. Teachers also completed two criterion measures, the Student Risk Screening Scale (SRSS) and the Student Internalizing Behavior Screener (SIBS). Additional extant behavioral and academic data sources were collected, including office discipline referrals, suspensions, curriculum-based measurement scores, and statewide achievement test scores. Reliability analyses supported the internal consistency of all four SAEBRS scales, and correlational analyses and Mann–Whitney–Wilcoxon tests supported criterion-related and construct validity. Receiver operating characteristic curve analyses suggested each SAEBRS scale was associated with acceptable or optimal diagnostic accuracy. However, the cut scores selected as most appropriate for each SAEBRS scale differed from those identified in previous studies, potentially suggesting that the criterion outcome under consideration influences SAEBRS diagnostic accuracy. Limitations and future directions for research are discussed, with emphasis on the need for continued examination of the variability in SAEBRS cut score performance.
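The internal consistency analyses noted above are conventionally summarized with Cronbach's alpha. The sketch below (not the authors' code) shows how alpha is computed from an item-by-respondent score matrix; the scale length and the ratings are invented for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical teacher ratings: 6 students on a 4-item scale (0-3 points per item).
ratings = np.array([
    [3, 2, 3, 3],
    [1, 1, 0, 1],
    [2, 2, 2, 3],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
    [2, 1, 2, 2],
])
print(round(cronbach_alpha(ratings), 2))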


School Psychology Quarterly | 2017

Use of direct behavior ratings to collect functional assessment data.

Stephen P. Kilgus; Jennifer S. Kazmerski; Crystal N. Taylor; Nathaniel P. von der Embse

The purpose of this investigation was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) methodology in collecting functional behavior assessment data. Specific questions of interest pertained to the evaluation of the accuracy of brief DBR-SIS ratings of behavioral consequences and determination of the type of training necessary to support such accuracy. Undergraduate student participants (N = 213; 62.0% male; 62.4% White) viewed video clips of students in a classroom setting, and then rated both disruptive behavior and 4 consequences of that behavior (i.e., adult attention, peer attention, escape/avoidance, and access to tangibles/activities). Results indicated training with performance feedback was necessary to support the generation of accurate disruptive behavior and consequence ratings. Participants receiving such support outperformed students in training-only, pretest–posttest, and posttest-only groups for disruptive behavior and all 4 DBR-SIS consequence targets. Future directions for research and implications for practice are discussed, including how teacher ratings may be collected along with other forms of assessment (e.g., progress monitoring) within an efficient Tier 2 assessment model.


School Psychology Quarterly | 2017

Meta-analysis of the effects of academic interventions and modifications on student behavior outcomes.

Kristy Warmbold-Brann; Matthew K. Burns; June L. Preast; Crystal N. Taylor; Lisa Aguilar

The current study examined the effect of academic interventions and modifications on behavioral outcomes in a meta-analysis of 32 single-case design studies. Academic interventions included modifying task difficulty, providing instruction in reading, mathematics, or writing, and providing contingent reinforcement for academic performance. There was an overall small to moderate effect (ϕ = .56) on behavioral outcomes, with a stronger effect on increasing time on task (ϕ = .64) than on decreasing disruptive behavior (ϕ = .42). There was a small effect for using a performance-based contingent reinforcer (ϕ = .48). Interventions completed in an individual setting resulted in moderate to large effects on behavioral outcomes. Results of the current meta-analysis suggest that academic interventions can yield both positive academic and behavioral outcomes. Practical implications and suggestions for future research are included.
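The abstract reports effect sizes as ϕ without detailing how they were derived from the single-case data, so the sketch below is illustrative only: it computes the textbook phi coefficient for a 2x2 table of hypothetical observation counts, which may not match the meta-analysis's actual effect-size metric.

import math

def phi_coefficient(a, b, c, d):
    # 2x2 table cells: a, b = first row; c, d = second row.
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical counts: on-task vs. off-task intervals, with vs. without
# the academic intervention in place.
print(round(phi_coefficient(40, 10, 22, 28), 2))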


School Psychology Quarterly | 2018

Screening for Behavioral Risk: Identification of High Risk Cut Scores within the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS).

Stephen P. Kilgus; Crystal N. Taylor; Nathaniel P. von der Embse

The purpose of this study was to support the identification of Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) cut scores that could be used to detect high-risk students. Teachers rated students across two time points (Time 1 n = 1,242 students; Time 2 n = 704) using the SAEBRS and the Behavioral and Emotional Screening System (BESS), the latter of which served as the criterion measure. Exploratory receiver operating characteristic (ROC) curve analyses of Time 1 data detected cut scores evidencing optimal levels of specificity and borderline-to-optimal levels of sensitivity. Cross-validation analyses of Time 2 data confirmed the performance of these cut scores, with all but one scale evidencing similar performance. Findings are considered particularly promising for the SAEBRS Total Behavior scale in detecting high-risk students.
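A minimal sketch of the kind of exploratory ROC cut-score search described above, run on simulated screener scores against a binary criterion. The cut is chosen here by maximizing Youden's J (sensitivity + specificity - 1), which is one common selection rule but not necessarily the one used in the study.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 500
at_risk = rng.random(n) < 0.2                       # criterion risk flag (e.g., BESS)
# Simulated screener scores; at-risk students tend to score higher here.
score = rng.normal(loc=np.where(at_risk, 14, 8), scale=3).round()

fpr, tpr, thresholds = roc_curve(at_risk, score)
j = tpr - fpr                                       # Youden's J at each candidate cut
best = j.argmax()
print(f"cut >= {thresholds[best]:.0f}: sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")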


School Psychology Quarterly | 2018

Diagnostic accuracy of a universal screening multiple gating procedure: A replication study.

Stephen P. Kilgus; Nathaniel P. von der Embse; Crystal N. Taylor; Michael P. Van Wie; Wesley A. Sims

The purpose of this diagnostic accuracy study was to evaluate the sensitivity and specificity (among other indicators) of three universal screening approaches: the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS), a SAEBRS-based teacher nomination tool, and a multiple gating procedure (MGP). Each screening approach was compared to the BASC-2 Behavioral and Emotional Screening System (BESS), which served as a criterion indicator of student social-emotional and behavioral risk. All data were collected concurrently. Participants included 704 students (47.7% female) from four elementary schools in the Midwestern United States (21.6% were at risk per the BESS). Findings yielded support for the SAEBRS, with sensitivity = .93 (95% confidence interval [.89–.97]), specificity = .91 (.89–.93), and correct classification = .92. Findings further supported the MGP, which yielded sensitivity = .81 (.74–.87), specificity = .93 (.91–.95), and correct classification = .91. In contrast, the teacher nomination tool yielded questionable levels of diagnostic accuracy (sensitivity = .86 [.80–.91], specificity = .74 [.70–.78], and correct classification = .76). Overall, findings were particularly supportive of SAEBRS diagnostic accuracy, while also suggesting the MGP may serve as an acceptable approach to universal screening. Other implications for practice and directions for future research are discussed.
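For readers unfamiliar with the indices quoted above, the short example below shows how sensitivity, specificity, and correct classification fall out of a 2x2 screening-decision table. The counts are hypothetical and are not the study's data.

# Hypothetical screening decisions versus the criterion (e.g., BESS risk status).
tp, fn = 45, 5        # criterion at-risk students flagged / missed by the screener
tn, fp = 180, 20      # not-at-risk students correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)                  # true positive rate
specificity = tn / (tn + fp)                  # true negative rate
correct = (tp + tn) / (tp + fn + tn + fp)     # overall agreement with the criterion

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, "
      f"correct classification = {correct:.2f}")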


Remedial and Special Education | 2018

Examining SAEBRS Technical Adequacy and the Moderating Influence of Criterion Type on Cut Score Performance

Stephen P. Kilgus; Nathaniel P. von der Embse; Amanda N. Allen; Crystal N. Taylor; Katie Eklund

The purpose of this study was to evaluate the internal consistency reliability, validity, and diagnostic accuracy of Social, Academic, and Emotional Behavior Risk Screener–Teacher Rating Scale (SAEBRS) scores. Teachers (n = 68) universally screened 1,242 elementary students using two measures: the SAEBRS and the Behavioral and Emotional Screening System (BESS). Multilevel analyses indicated that although SAEBRS scores were internally consistent at the overall level, reliability suffered for certain SAEBRS scores at the between-group (classroom) level. Multilevel correlational analyses revealed moderate-to-large and statistically significant relations between SAEBRS and BESS scores at the overall, between-group, and within-group levels. Follow-up Fisher’s z tests revealed a pattern of convergent and discriminant relations in accordance with theory-driven expectations. Receiver operating characteristic (ROC) curve analyses supported the diagnostic accuracy of each SAEBRS scale. Further examination of findings relative to prior research suggested SAEBRS diagnostic accuracy is moderated by the type of criterion measure under consideration.
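The Fisher's z tests mentioned above compare correlation magnitudes. The sketch below shows the simplest independent-samples form of the test with made-up values; comparing correlations estimated within the same sample, as in this study, would call for a dependent-correlations variant (e.g., Steiger's test).

import math

def fisher_z_test(r1, n1, r2, n2):
    # Fisher r-to-z transform, then a z test on the difference.
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed p
    return z, p

# Hypothetical convergent (r1) vs. discriminant (r2) correlations.
z, p = fisher_z_test(r1=0.70, n1=300, r2=0.45, n2=300)
print(f"z = {z:.2f}, p = {p:.4f}")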


Journal of Emotional and Behavioral Disorders | 2018

The Student Risk Screening Scale: A Reliability and Validity Generalization Meta-Analysis

Stephen P. Kilgus; Katie Eklund; Daniel M. Maggin; Crystal N. Taylor; Amanda N. Allen

The purpose of this study was to conduct reliability and validity generalization meta-analyses of evidence regarding the Student Risk Screening Scale (SRSS), a universal screener for externalizing behavior problems. A systematic review of the literature resulted in the identification of 17 studies inclusive of evidence regarding SRSS score (a) internal consistency reliability (i.e., alpha coefficients), and/or (b) criterion-related validity (e.g., correlations between the SRSS and various outcomes). Multilevel meta-analyses indicated that across studies, SRSS scores were associated with adequate internal consistency (α = .83). Analyses further suggested the SRSS was a valid indicator of both social and behavioral outcomes (r = .52) and academic outcomes (r = .42). Follow-up analyses suggested that in accordance with theory-driven expectations, the SRSS was a stronger indicator of externalizing problems and broad behavior outcomes relative to alternative outcomes (e.g., internalizing problems). Limitations and directions for future research are discussed, including recommendations for the collection of additional SRSS diagnostic accuracy evidence.
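As a simplified stand-in for the multilevel meta-analytic models used in the article, the sketch below pools hypothetical study-level correlations with fixed-effect inverse-variance weighting of Fisher-z values. It illustrates only the basic aggregation step behind a pooled r.

import math

studies = [(0.48, 120), (0.55, 85), (0.39, 200), (0.60, 64)]   # hypothetical (r, n) pairs

weights = [n - 3 for _, n in studies]            # inverse variance of Fisher z
zs = [math.atanh(r) for r, _ in studies]
pooled_z = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
pooled_r = math.tanh(pooled_z)
print(f"pooled r = {pooled_r:.2f}")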


Journal of Applied School Psychology | 2018

Treatment Utility of Universal Screening for Behavioral Risk: A Manipulated Assessment Study

Crystal N. Taylor; Stephen P. Kilgus; Francis L. Huang

In recent years, schools have begun implementing preventive practices such as universal screening. Yet researchers have not evaluated the extent to which universal screening contributes to academic and behavioral outcomes. The purpose of this study was to evaluate the treatment utility of universal screening for behavioral risk. Student participants were randomly assigned to two groups, each corresponding to a different method of identification (office discipline referrals or universal screening). Participants identified as at risk on the basis of their office discipline referral or universal screening data, and selected for intervention, received a Tier 2 intervention. Data were analyzed using a fixed-effects regression model. Analyses identified no statistically significant differences between the two groups on various academic and behavioral outcomes, suggesting that universal screening did not contribute to changes in student functioning.
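A minimal sketch, on simulated data and with assumed variable names (screened, school, reading_score), of the kind of fixed-effects regression described above: an outcome regressed on the identification-method indicator with school fixed effects. The study's actual model specification is not given in the abstract.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "screened": rng.integers(0, 2, n),   # 1 = identified via universal screening
    "school": rng.integers(0, 4, n),     # fixed-effect grouping factor
})
df["reading_score"] = 100 + 5 * df["school"] + 0.5 * df["screened"] + rng.normal(0, 10, n)

# Group effect estimated with school dummies absorbing between-school differences.
model = smf.ols("reading_score ~ screened + C(school)", data=df).fit()
print(model.params["screened"], model.pvalues["screened"])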


Behavioral Disorders | 2018

Development and Validation of a Parent Version of the Social, Academic, and Emotional Behavior Risk Screener in an Elementary Sample

Crystal N. Taylor; Amanda N. Allen; Stephen P. Kilgus; Nathaniel P. von der Embse; Andrew S. Garbacz

A line of research has supported the incremental and construct validity of multi-informant assessment. Accordingly, multiple universal screening systems have been designed to support the collection of information from multiple informants. The purpose of the study was to expand the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) suite of tools to include a novel parent form (SAEBRS-P). Specific aims of the study included the examination of SAEBRS-P factor structure, internal consistency reliability, and concurrent validity. Screening was conducted across four elementary schools in the Pacific Northwest with 212 students and their parents. Factor analytic results supported the retention of four factors, which demonstrated acceptable internal consistency reliability. Further analyses supported SAEBRS-P concurrent criterion-related validity, indicating moderate to high correlations between SAEBRS-P and Strengths and Difficulties Questionnaire (SDQ) scales. Limitations and implications for research are discussed.
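A minimal sketch of fitting a four-factor model to item-level responses, using scikit-learn's FactorAnalysis on simulated data. The item count (20) is an assumption, and the article's estimator, rotation, and factor-retention criteria may differ.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
items = rng.integers(0, 4, size=(212, 20)).astype(float)   # 212 parents x 20 items

fa = FactorAnalysis(n_components=4).fit(items)
print(fa.components_.shape)   # (4, 20): loadings of each item on the four factors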


Journal of Psychoeducational Assessment | 2017

Reliability and Relationship to Retention of Assessing an Acquisition Rate for Sight Words With Kindergarten Students

Crystal N. Taylor; Lisa Aguilar; Matthew K. Burns; June L. Preast; Kristy Warmbold-Brann

Teaching children too many words during a lesson reduces retention. The amount of new information a student can successfully rehearse and recall later is called the acquisition rate (AR), which has been reliably measured with students in first, third, and fifth grades. The purpose of this study was to examine the reliability of assessing AR for sight words with kindergarten students. A total of 32 kindergarten students from five classrooms across two elementary schools participated in the study. AR was measured twice over a 2-week period, and 1-day retention was measured for the first AR. The AR data yielded a 2-week delayed alternate-form reliability of r = .83, and there was also a strong correlation between AR and the number of words retained 1 day later. Limitations, implications, and considerations regarding the name of the construct being assessed are discussed.
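The delayed alternate-form reliability reported here is a Pearson correlation between the two AR administrations. A minimal sketch with invented scores:

import numpy as np

ar_week1 = np.array([4, 6, 3, 5, 7, 2, 5, 4, 6, 3])   # words acquired, form A
ar_week3 = np.array([5, 6, 3, 4, 7, 3, 5, 5, 6, 2])   # words acquired, form B (2 weeks later)

r = np.corrcoef(ar_week1, ar_week3)[0, 1]
print(round(r, 2))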

Collaboration


Dive into Crystal N. Taylor's collaborations.

Top Co-Authors

Amanda N. Allen
University of Missouri–Kansas City

Kristy Warmbold-Brann
University of Missouri–Kansas City