Publication


Featured research published by Brian Daniels.


School Psychology Quarterly | 2015

An Evaluation of Observational Methods for Measuring Response to Classwide Intervention.

Amy M. Briesch; Elizabeth M. Hemphill; Robert J. Volpe; Brian Daniels

Although there is much research to support the effectiveness of classwide interventions aimed at improving student engagement, there is also a great deal of variability in terms of how response to group-level intervention has been measured. The unfortunate consequence of this procedural variability is that it is difficult to determine whether differences in obtained results across studies are attributable to the way in which behavior was measured or to actual intervention effectiveness. The purpose of this study was to comparatively evaluate the most commonly used observational methods for monitoring the effects of classwide interventions in terms of the degree to which obtained data represented actual behavior. The 5 most common sampling methods were identified and evaluated against a criterion generated by averaging across observations conducted on 14 students in one seventh-grade classroom. Results suggested that the best approximation of mean student engagement was obtained by observing a different student during each consecutive 15-s interval, whereas observing an entire group of students during each interval underestimated the mean level of behavior within a phase and the degree of behavior change across phases. In contrast, when observations were restricted to the 3 students with the lowest levels of engagement, data revealed greater variability in engagement across baseline sessions and suggested a more notable change in student behavior subsequent to intervention implementation.
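
A minimal simulation (not from the paper) can make the divergence concrete. In the sketch below, 14 simulated students each have a fixed true engagement probability, and three sampling schemes are scored against the known classwide mean; the whole-group method is read here as crediting an interval only when every student is on task, which is one plausible interpretation, not necessarily the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

n_students, n_intervals = 14, 120            # e.g., 30 min of 15-s intervals
p_true = rng.uniform(0.4, 0.95, n_students)  # each student's true engagement rate

# Engagement matrix: rows = intervals, columns = students (True = engaged)
engaged = rng.random((n_intervals, n_students)) < p_true

criterion = engaged.mean()  # "true" classwide engagement across the session

# (a) Rotate through students, observing a different one each interval
rotating = engaged[np.arange(n_intervals), np.arange(n_intervals) % n_students]

# (b) Whole-group momentary sample: interval credited only if ALL are engaged
#     (an assumed scoring rule, chosen to illustrate the underestimation)
whole_group = engaged.all(axis=1)

# (c) Observe only the 3 students with the lowest true engagement
lowest3 = engaged[:, np.argsort(p_true)[:3]].mean(axis=1)

print(f"criterion:    {criterion:.2f}")
print(f"rotating:     {rotating.mean():.2f}")     # tracks the criterion closely
print(f"whole group:  {whole_group.mean():.2f}")  # systematically underestimates
print(f"lowest three: {lowest3.mean():.2f}")      # lower and more variable
```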


School Psychology Quarterly | 2012

The influence of observation length on the dependability of data.

Tyler David Ferguson; Amy M. Briesch; Robert J. Volpe; Brian Daniels

Although direct observation is one of the assessment methods most frequently used by school psychologists, studies have shown that the number of observations needed to obtain a dependable estimate of student behavior may be impractical. Because direct observation may be used to inform important decisions about students, it is crucial that data be reliable. Preliminary research has suggested that dependability may be improved by extending the length of individual observations. The purpose of the current study was, therefore, to examine how changes in observational duration affect the dependability of student engagement data. Twenty seventh-grade students were each observed for 30 min across 2 days during math instruction. Generalizability theory was then used to calculate reliability-like coefficients for the purposes of intraindividual decision making. Across days, acceptable levels of dependability for progress monitoring (i.e., .70) were achieved through two 30-min observations, three 15-min observations, or four to five 10-min observations. Acceptable levels of dependability for higher-stakes decisions (i.e., .80) required over an hour of cumulative observation time. Within a given day, a 15-min observation was found to be adequate for making low-stakes decisions, whereas an hour-long observation was necessary for high-stakes decision making. Limitations of the current study and implications for research and practice are discussed.
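
The projection behind findings like these is compact. In a persons x occasions generalizability design, the dependability coefficient for absolute decisions is ϕ = σ²(p) / (σ²(p) + (σ²(o) + σ²(po,e)) / n), and a decision (D) study simply re-evaluates it for hypothetical numbers of occasions. A sketch with placeholder variance components (not the study's estimates):

```python
# D study for a persons x occasions (p x o) design.
# Variance components below are illustrative placeholders, not the study's estimates.
var_p   = 0.045   # persons (true-score) variance
var_o   = 0.005   # occasions variance
var_poe = 0.040   # person-by-occasion interaction confounded with error

def phi(n_occasions: int) -> float:
    """Dependability (phi) for absolute decisions averaged over n occasions."""
    return var_p / (var_p + (var_o + var_poe) / n_occasions)

for n in (1, 2, 3, 5):
    print(f"{n} occasion(s): phi = {phi(n):.2f}")
# Output climbs from .50 toward 1.0; the applied question is how much cumulative
# observation time is needed to cross the .70 (low-stakes) or .80 (high-stakes) bar.
```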


School Psychology Quarterly | 2014

Development of a Problem-Focused Behavioral Screener Linked to Evidence-Based Intervention

Brian Daniels; Robert J. Volpe; Amy M. Briesch; Gregory A. Fabiano

This study examines the factor structure, reliability and validity of a novel school-based screening instrument for academic and disruptive behavior problems commonly experienced by children and adolescents with attention deficit hyperactivity disorder (ADHD). Participants included 39 classroom teachers from two public school districts in the northeastern United States. Teacher ratings were obtained for 390 students in grades K-6. Exploratory factor analysis supports a two-factor structure (oppositional/disruptive and academic productivity/disorganization). Data from the screening instrument demonstrate favorable internal consistency, temporal stability and convergent validity. The novel measure should facilitate classroom intervention for problem behaviors associated with ADHD by identifying at-risk students and determining specific targets for daily behavior report card interventions.
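
For readers who want the mechanics, internal consistency of a screener factor is conventionally summarized with Cronbach's alpha, computed from a respondents-by-items rating matrix. A self-contained sketch on simulated ratings (the data below are invented, not the study's):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) rating matrix."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated teacher ratings: 390 students x 6 items on a 0-3 scale,
# driven by a single latent trait so the items cohere
rng = np.random.default_rng(1)
trait = rng.normal(size=(390, 1))
items = np.clip(np.round(1.5 + trait + rng.normal(scale=0.8, size=(390, 6))), 0, 3)
print(f"alpha = {cronbach_alpha(items):.2f}")
```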


School Psychology Quarterly | 2017

Classification Accuracy and Acceptability of the Integrated Screening and Intervention System Teacher Rating Form.

Brian Daniels; Robert J. Volpe; Gregory A. Fabiano; Amy M. Briesch

This study examines the classification accuracy and teacher acceptability of a problem-focused screener for academic and disruptive behavior problems, which is directly linked to evidence-based intervention. Participants included 39 classroom teachers from 2 public school districts in the Northeastern United States. Teacher ratings were obtained for 390 students in Grades K–6. Data from the screening instrument demonstrate favorable classification accuracy, and teacher ratings of feasibility and acceptability support the use of the measure for universal screening in elementary school settings. Results indicate the novel measure should facilitate classroom intervention for problem behaviors by identifying at-risk students and informing targets for daily behavior report card interventions.
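
Classification accuracy for a screener is typically reported as sensitivity and specificity at a candidate cutoff. A minimal sketch with simulated screener totals and a hypothetical at-risk criterion (none of these values come from the study):

```python
import numpy as np

def sens_spec(scores, at_risk, cutoff):
    """Sensitivity/specificity of flagging scores >= cutoff against a criterion."""
    flagged = scores >= cutoff
    sens = (flagged & at_risk).sum() / at_risk.sum()
    spec = (~flagged & ~at_risk).sum() / (~at_risk).sum()
    return sens, spec

rng = np.random.default_rng(2)
at_risk = rng.random(390) < 0.15                            # simulated criterion status
scores = rng.normal(loc=np.where(at_risk, 12, 6), scale=3)  # simulated screener totals

for cutoff in (8, 10, 12):
    sens, spec = sens_spec(scores, at_risk, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
# Raising the cutoff trades sensitivity for specificity; screening favors sensitivity.
```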


Assessment for Effective Intervention | 2017

Dependability and Treatment Sensitivity of Multi-Item Direct Behavior Rating Scales for Interpersonal Peer Conflict.

Brian Daniels; Robert J. Volpe; Amy M. Briesch; Kenneth D. Gadow

Direct behavior rating (DBR) represents a feasible method for monitoring student behavior in the classroom; however, limited work to date has focused on the use of multi-item scales. The purposes of the study were to examine the (a) dependability of data obtained from a multi-item DBR designed to assess peer conflict and (b) treatment sensitivity of Direct Behavior Rating Multi-Item Scales (DBR-MIS) constructed using factor-derived and individualized methods. Analyses were performed using teacher ratings of 65 students (53 boys, 12 girls) between 6 and 12 years old. Results of decision studies indicated that an acceptable criterion of dependability (ϕ > .70) for low-stakes, intraindividual decision making could be achieved using a three-item scale across eight occasions, a four- or five-item scale across four occasions, or a six-item scale across three occasions. Subsequent analyses verified that a six-item DBR demonstrated acceptable treatment sensitivity when ratings were conducted on 3 days during baseline and 3 days during treatment with methylphenidate. Implications for practice and future research are discussed.
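
The item-by-occasion tradeoff reported here falls out of a persons x items x occasions D study, where ϕ equals σ²(p) divided by σ²(p) plus the item-, occasion-, and interaction-error terms, each shrunk by the number of items and occasions sampled. A sketch with placeholder variance components (not the study's estimates):

```python
# D study for a persons x items x occasions (p x i x o) random-effects design.
# Variance components are illustrative placeholders, not the study's estimates.
vc = dict(p=0.50, i=0.05, o=0.05, io=0.05, pi=0.30, po=0.30, pioe=0.60)

def phi(n_i: int, n_o: int) -> float:
    """Dependability (phi) for absolute decisions with n_i items x n_o occasions."""
    error = (vc["i"] / n_i + vc["o"] / n_o + vc["io"] / (n_i * n_o)
             + vc["pi"] / n_i + vc["po"] / n_o + vc["pioe"] / (n_i * n_o))
    return vc["p"] / (vc["p"] + error)

# Candidate designs analogous to those discussed in the abstract
for n_i, n_o in [(3, 8), (4, 4), (5, 4), (6, 3)]:
    print(f"{n_i} items x {n_o} occasions: phi = {phi(n_i, n_o):.2f}")
```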


School Psychology Review | 2017

Examining the Influence of Interval Length on the Dependability of Observational Estimates

Amy M. Briesch; Tyler David Ferguson; Brian Daniels; Robert J. Volpe; Adam B. Feinberg

Systematic direct observation is a tool commonly employed by school psychologists to investigate student behavior. As these data are used for educational decision-making, ensuring the psychometric adequacy of the obtained data is an important consideration. Given that procedural aspects of systematic direct observation have been shown to influence the psychometric properties of obtained data, this study was designed to explore how interval length influences the dependability of academic engagement data when using a momentary time sampling procedure. Twenty seventh-grade students were each observed for two 15-min sessions during math instruction. A series of generalizability studies were conducted to examine how manipulations to interval length influenced reliability-like coefficients. In general, shorter interval lengths (i.e., 10 s, 15 s) were shown to produce higher levels of dependability. For example, an acceptable level of dependability (i.e., ϕ = .70) required twice as many 30-min observations when utilizing 20- or 30-s sampling as were required when utilizing 10- or 15-s sampling. Furthermore, whereas an acceptable level of dependability (i.e., ϕ = .70) could not be obtained using any interval length when conducting a single observation, this criterion was met using either 10- or 15-s sampling across two 30-min observations.
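
One intuition for the interval-length effect: momentary time sampling records one binary snapshot per interval, so a fixed-length session contains more snapshots, and therefore less sampling error, when intervals are short. The toy simulation below assumes independent snapshots, which real (autocorrelated) behavior streams only approximate:

```python
import numpy as np

rng = np.random.default_rng(3)
session_s, p_engaged, n_reps = 15 * 60, 0.75, 2000   # one 15-min observation

for interval_s in (10, 15, 20, 30):
    n_snapshots = session_s // interval_s
    # Each momentary snapshot is treated as an independent Bernoulli observation;
    # autocorrelation in real behavior streams shrinks the gain shown here.
    samples = rng.random((n_reps, n_snapshots)) < p_engaged
    estimates = samples.mean(axis=1)
    print(f"{interval_s:>2}-s intervals: {n_snapshots:2d} snapshots, "
          f"SD of session estimate = {estimates.std():.3f}")
```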


European Journal of Psychological Assessment | 2017

Measurement Invariance of a Universal Behavioral Screener Across Samples From the USA and Germany

Gino Casale; Robert J. Volpe; Brian Daniels; Thomas Hennemann; Amy M. Briesch; Michael Grosche

The current study examines the item and scalar equivalence of an abbreviated school-based universal screener that was cross-culturally translated and adapted from English into German. The instrument was designed to assess student behavior problems that impact classroom learning. Participants were 1,346 K-6 grade students from the US (n = 390, mean age = 9.23 years, 38.5% female) and Germany (n = 956, mean age = 8.04 years, 40.1% female). Measurement invariance was tested by multigroup confirmatory factor analysis (CFA) across students from the US and Germany. Results support full scalar invariance between students from the US and Germany (df = 266, χ² = 790.141, Δχ² = 6.9, p < .001, CFI = 0.976, ΔCFI = 0.000, RMSEA = 0.052, ΔRMSEA = −0.003), indicating that the factor structure, the factor loadings, and the item thresholds are comparable across samples. This finding implies that a full cross-cultural comparison including latent factor means and structural coefficients between the US and the German version of the abbreviated screener is possible. Therefore, the tool can be used in German schools as well as for cross-cultural research purposes between the US and Germany.
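
The scalar-invariance verdict rests on how little model fit degrades as equality constraints are added across groups. Widely used conventions (ΔCFI ≤ .01, Cheung & Rensvold, 2002; ΔRMSEA ≤ .015, Chen, 2007) can be encoded directly; these cutoffs are common defaults used here for illustration, not necessarily the exact criteria the authors applied:

```python
def invariance_holds(delta_cfi: float, delta_rmsea: float,
                     max_d_cfi: float = 0.01, max_d_rmsea: float = 0.015) -> bool:
    """Retain the more constrained (invariant) model if fit barely degrades.

    Cutoffs are common conventions (Cheung & Rensvold, 2002; Chen, 2007),
    used here for illustration rather than as the authors' stated criteria.
    """
    return abs(delta_cfi) <= max_d_cfi and abs(delta_rmsea) <= max_d_rmsea

# Fit changes reported for the scalar step in the abstract above
print(invariance_holds(delta_cfi=0.000, delta_rmsea=-0.003))  # True
```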


Psychology in the Schools | 2013

Using Self-Management Interventions to Address General Education Behavioral Needs: Assessment of Effectiveness and Feasibility.

Amy M. Briesch; Brian Daniels


School Psychology Forum | 2013

Check Your SLANT: Adapting Self-Management for Use as a Class-Wide Intervention.

Amy M. Briesch; Elizabeth M. Hemphill; Brian Daniels


PsycTESTS Dataset | 2018

Integrated Teacher Report Form--Abbreviated Version; German Version

Gino Casale; Robert J. Volpe; Brian Daniels; Thomas Hennemann; Amy M. Briesch; Michael Grosche

Collaboration


Dive into Brian Daniels's collaborations.

Top Co-Authors

Gino Casale

University of Paderborn
