
Publication


Featured research published by Alexander M. Schoemann.


Psychological Methods | 2013

Why the items versus parcels controversy needn't be one.

Todd D. Little; Mijke Rhemtulla; Kimberly Gibson; Alexander M. Schoemann

The use of item parcels has been a matter of debate since the earliest use of factor analysis and structural equation modeling. Here, we review the arguments that have been levied both for and against the use of parcels and discuss the relevance of these arguments in light of the growing body of empirical evidence investigating their performance. We discuss the many advantages of parcels that some researchers find attractive and highlight, too, the potential problems that ill-informed use can incur. We argue that no absolute pro or con stance is warranted. Parcels are an analytic tool like any other. There are circumstances in which parceling is useful and circumstances in which it is not. We emphasize the precautions that should be taken when creating item parcels and interpreting model results based on parcels. Finally, we review and compare several proposed strategies for parcel building and suggest directions for further research.
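The basic mechanics of parceling can be sketched briefly: subsets of items are averaged into parcels, and the parcels (rather than the raw items) serve as indicators of the latent construct. A minimal illustration, with a hypothetical 9-item scale and a hypothetical item-to-parcel grouping:

```python
import numpy as np

# Hypothetical data: 200 respondents, 9 items on one construct.
rng = np.random.default_rng(2)
items = rng.normal(size=(200, 9))

# Hypothetical grouping of items into three parcels; in practice the
# grouping strategy (e.g., balancing item difficulty or loadings)
# is one of the decisions the article reviews.
parcel_map = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
parcels = np.column_stack(
    [items[:, idx].mean(axis=1) for idx in parcel_map]
)
print(parcels.shape)  # (200, 3): three parcel indicators
```

The parcels would then replace the nine items as indicators in the measurement model.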


Educational and Psychological Measurement | 2013

Orthogonalizing Through Residual Centering: Extended Applications and Caveats

G. John Geldhof; Sunthud Pornprasertmanit; Alexander M. Schoemann; Todd D. Little

Residual centering is a useful tool for orthogonalizing variables and latent constructs, yet it is underused in the literature. The purpose of this article is to encourage residual centering’s use by highlighting instances where it can be helpful: modeling higher order latent variable interactions, removing collinearity from latent constructs, creating phantom indicators for multiple group models, and controlling for covariates prior to latent variable analysis. Residual centering is not without its limitations, however, and the authors also discuss caveats to be mindful of when implementing this technique. They discuss the perils of double orthogonalization (i.e., simultaneously orthogonalizing A relative to B and B relative to the original A), the unintended consequences of orthogonalization on model fit, the removal of a mean structure, and the effects of nonnormal data on residual centering.
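The core operation of residual centering is simple to state: regress a product (or otherwise collinear) term on its first-order components and keep only the residual, which is then orthogonal to those components by construction. A generic sketch of this idea (not the authors' implementation; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
a = rng.normal(size=n)
b = 0.4 * a + rng.normal(size=n)  # correlated predictors
prod = a * b                      # raw product (interaction) term

# Residual centering: regress the product term on the first-order
# terms (plus intercept) and retain the residual.
X = np.column_stack([np.ones(n), a, b])
beta, *_ = np.linalg.lstsq(X, prod, rcond=None)
prod_rc = prod - X @ beta

# The residual-centered term is uncorrelated with a and b,
# so it carries only the unique interaction variance.
print(np.corrcoef(prod_rc, a)[0, 1])  # ~0
print(np.corrcoef(prod_rc, b)[0, 1])  # ~0
```

In latent variable applications the same logic is applied to product indicators before they enter the measurement model.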


Educational Psychology | 2017

The influence of test-based accountability policies on teacher stress and instructional practices: a moderated mediation model

Nathaniel P. von der Embse; Alexander M. Schoemann; Stephen P. Kilgus; Maribeth Wicoff; Mark C. Bowler

The present study examined the use of student test performance for merit pay and teacher evaluation as predictive of both educator stress and counterproductive teaching practices, and the moderating role of perceived test value. Structural equation modelling of data from a sample of 7281 educators in a South-eastern state in the United States supported the hypothesis that educators who perceived the test as an invalid measure of teaching effectiveness were more likely to report high levels of test stress and to use counterproductive teaching practices, including fear appeals, in an attempt to motivate students for test-taking. This study provides initial evidence for the hypothesised relationships of test-based accountability policy with teacher mental health and instructional practices. Implications for research and practice are discussed.


International Journal of Behavioral Development | 2014

Planned missing data designs with small sample sizes: How small is too small?

Fan Jia; E. Whitney G. Moore; Richard Kinai; Kelly S. Crowe; Alexander M. Schoemann; Todd D. Little

Utilizing planned missing data (PMD) designs (e.g., three-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using simulated three-form planned missing data to assess analytic model convergence, parameter estimate bias, standard error bias, mean squared error (MSE), and relative efficiency (RE). Three models were examined: a one-time-point, cross-sectional model with 3 constructs; a two-time-point model with 3 constructs at each time point; and a three-time-point, mediation model with 3 constructs over three time points. Both full-information maximum likelihood (FIML) and multiple imputation (MI) were used to handle the missing data. Models were found to meet convergence rate and acceptable bias criteria with FIML at smaller sample sizes than with MI.
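The three-form design referenced here works by splitting items into a common block X administered to everyone plus three blocks A, B, and C, with each form omitting exactly one of A, B, or C. A minimal sketch of how such missingness is induced, using a hypothetical 12-item survey:

```python
import numpy as np

# Hypothetical 12-item survey split into a common block X and
# three rotating blocks A, B, C (three items each).
blocks = {"X": [0, 1, 2], "A": [3, 4, 5], "B": [6, 7, 8], "C": [9, 10, 11]}
forms = {1: ["X", "A", "B"], 2: ["X", "A", "C"], 3: ["X", "B", "C"]}

rng = np.random.default_rng(1)
n, n_items = 9, 12
data = rng.normal(size=(n, n_items))   # complete responses
form_of = rng.integers(1, 4, size=n)   # randomly assign one form each

for i, f in enumerate(form_of):
    asked = {item for blk in forms[f] for item in blocks[blk]}
    for j in range(n_items):
        if j not in asked:
            data[i, j] = np.nan        # planned (MCAR) missingness

# Each respondent answers 9 of 12 items, yet every pair of items is
# still jointly observed on at least one form, so all covariances
# remain estimable with FIML or MI.
```

This joint-observation property is what lets the simulated models in the article converge despite each participant skipping a quarter of the items.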


International Journal of Behavioral Development | 2014

Using Monte Carlo simulations to determine power and sample size for planned missing designs

Alexander M. Schoemann; Patrick Miller; Sunthud Pornprasertmanit; Wei Wu

Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design will decrease power. Thus, when designing a study utilizing planned missing data researchers need to perform a power analysis. In this article, we describe methods for power analysis and sample size determination for planned missing data designs using Monte Carlo simulations. We also describe a new, more efficient method of Monte Carlo power analysis, software that can be used in these approaches, and several examples of popular planned missing data designs.
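The Monte Carlo logic described here generalizes beyond planned missing designs: simulate many datasets under an assumed population model, fit the analysis model to each, and take the proportion of significant replications as the power estimate. A generic sketch of that loop (not the authors' software; a simple two-group comparison stands in for the planned-missing model):

```python
import numpy as np
from scipy import stats

def mc_power(n_per_group, effect_d, n_reps=2000, alpha=0.05, seed=0):
    """Estimate power by Monte Carlo simulation: the share of
    simulated replications in which the test rejects at alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        g1 = rng.normal(0.0, 1.0, n_per_group)          # control group
        g2 = rng.normal(effect_d, 1.0, n_per_group)     # shifted group
        _, p = stats.ttest_ind(g1, g2)                  # fit analysis model
        hits += p < alpha
    return hits / n_reps

print(mc_power(64, 0.5))  # ~0.80 for d = 0.5, n = 64 per group
```

For a planned missing design, the simulation step would additionally impose the design's missingness pattern before fitting, so the power estimate reflects the data actually collected.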


International Journal of Behavioral Development | 2014

Optimal assignment methods in three-form planned missing data designs for longitudinal panel studies

Terrence D. Jorgensen; Mijke Rhemtulla; Alexander M. Schoemann; Brent McPherson; Wei Wu; Todd D. Little

Planned missing designs are becoming increasingly popular, but because there is no consensus on how to implement them in longitudinal research, we simulated longitudinal data to distinguish between strategies of assigning items to forms and of assigning forms to participants across measurement occasions. Using relative efficiency as the criterion, results indicate that balanced item assignment coupled with assigning different forms over time most often yields the optimal assignment method, but only if variables are reliable. We also address how practice effects can bias latent means. A second simulation demonstrates that (a) assigning different forms over time diminishes practice effects and (b) using planned-missing-data patterns as predictors of practice can remove bias altogether.


Journal of Social and Personal Relationships | 2012

Regrets, I’ve had a few: Effects of dispositional and manipulated attachment on regret

Alexander M. Schoemann; Omri Gillath; Amanda K. Sesko

Despite extensive research on regret, relatively little is known about the underlying mechanisms of regret within close relationships. Attachment theory provides a theoretical framework to study regret within close relationships. Specifically, anxiously attached people tend to have negative views of themselves and their past acts, which is likely to result in a higher tendency to experience regret. Attachment security is likely to attenuate this tendency. Two studies provide support for these claims. Study 1 used a correlational design and showed that attachment anxiety is positively associated with the tendency to feel regret within close relationships but not with regrets in other domains. Study 2 used an experimental design and showed that reducing attachment anxiety via attachment security enhancement reduces the tendency to feel regret mainly for participants high in attachment anxiety.


Journal of Consulting and Clinical Psychology | 2017

Outcomes for adolescents who comply with long-term psychosocial treatment for ADHD.

Brandon K. Schultz; Steven W. Evans; Joshua M. Langberg; Alexander M. Schoemann

Objective: We conducted a large (N = 216) multisite clinical trial of the Challenging Horizons Program (CHP)—a yearlong afterschool program that provides academic and interpersonal skills training for adolescents with attention-deficit/hyperactivity disorder. Intent-to-treat analyses suggest that, as predicted, the CHP resulted in significant reductions in problem behaviors and academic impairment when compared to community care. However, attendance in the CHP ranged from zero to 60 sessions, raising questions about optimal dosing. Method: To evaluate the impact of treatment compliance, complier average causal effect modeling was used to compare participants who attended 80% or more of sessions to an estimate of outcomes for comparable control participants. Results: Treatment compliers exhibited medium to large benefits (ds = 0.56 to 2.00) in organization, disruptive behaviors, homework performance, and grades relative to comparable control estimates, with results persisting 6 months after treatment ended. However, compliance had little impact on social skills. Conclusions: Students most in need of treatment were most likely to comply, resulting in significant benefits in relation to comparable control participants who experienced deteriorating outcomes over time. Difficulties relating to dose-response estimation and the potentially confounding influence of treatment acceptability, accessibility, and client motivation are discussed.


School Psychology Quarterly | 2016

Reliability of Direct Behavior Ratings – Social Competence (DBR-SC) data: How many ratings are necessary?

Stephen P. Kilgus; T. Chris Riley-Tillman; Janine P. Stichter; Alexander M. Schoemann; Katie Bellesheim

The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as possessing deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months. During this time, each student was assigned to 1 of 2 intervention conditions: the Social Competence Intervention-Adolescent (SCI-A) or a business-as-usual (BAU) intervention. Ratings were collected across 3 intervention phases, including pre-, mid-, and postintervention. Results suggested DBR-SC ratings were highly consistent across time within each student, with reliability coefficients predominantly falling in the .80 and .90 ranges. Findings further indicated such levels of reliability could be achieved with only a small number of ratings, with estimates varying between 2 and 10 data points. Group comparison analyses further suggested the reliability of DBR-SC ratings increased over time, such that student behavior became more consistent throughout the intervention period. Furthermore, analyses revealed that for 2 of the 5 DBR-SC behavior targets, the increase in reliability over time was moderated by intervention grouping, with students receiving SCI-A demonstrating greater increases in reliability relative to those in the BAU group. Limitations of the investigation as well as directions for future research are discussed herein.


Behavior Modification | 2018

Exploring the Moderating Effects of Cognitive Abilities on Social Competence Intervention Outcomes

Janine P. Stichter; Melissa J. Herzog; Stephen P. Kilgus; Alexander M. Schoemann

Many populations served by special education, including students identified with autism, emotional impairments, or as not ready to learn, experience social competence deficits. The Social Competence Intervention-Adolescents (SCI-A) methods, content, and materials were designed to be maximally pertinent and applicable to the social competence needs of early adolescents (i.e., age 11-14 years) identified as having scholastic potential but experiencing significant social competence deficits. Given the importance of establishing intervention efficacy, the current paper highlights the results from a four-year cluster randomized trial (CRT) examining the efficacy of SCI-A (n = 146 students) relative to Business As Usual (BAU; n = 123 students) school-based programming. Educational personnel delivered all programming, including both intervention and BAU conditions. Student functioning was assessed across multiple time points, including pre-, mid-, and post-intervention. Outcomes of interest included social competence behaviors, which were assessed via both systematic direct observation and teacher behavior rating scales. Data were analyzed using multilevel models, with students nested within schools. Results suggested that, after controlling for baseline behavior and student IQ, BAU and SCI-A students differed to a statistically significant degree across multiple indicators of social performance. Further consideration of standardized mean difference effect sizes revealed these between-group differences to be representative of medium effects (d > .50). Such outcomes pertained to student (a) awareness of social cues and information, and (b) capacity to appropriately interact with teachers and peers. The need for additional power and the investigation of potential moderators and mediators of social competence effectiveness are explored.

Collaboration


Top Co-Authors

Todd D. Little (University of North Texas)

Fan Jia (University of Kansas)

Wei Wu (University of Kansas)

Patrick Miller (University of Notre Dame)