
Publication


Featured research published by Thomas A. Schmitt.


Journal of Psychoeducational Assessment | 2011

Current Methodological Considerations in Exploratory and Confirmatory Factor Analysis

Thomas A. Schmitt

Researchers must make numerous choices when conducting factor analyses, each of which can have significant ramifications for the model results. They must decide on an appropriate sample size to achieve accurate parameter estimates and adequate power, a factor model and estimation method, a method for determining the number of factors and evaluating model fit, and a rotation criterion. Unfortunately, researchers continue to use outdated methods in each of these areas. The present article provides a current overview of these areas in an effort to equip researchers with up-to-date methods and considerations in both exploratory and confirmatory factor analysis. A demonstration is provided to illustrate current approaches. Choosing between confirmatory and exploratory methods is also discussed, as researchers often make incorrect assumptions about the application of each.
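One widely recommended modern approach to the "number of factors" decision mentioned above is Horn's parallel analysis. The sketch below is a minimal, illustrative Python implementation using only numpy with a simulated data matrix; it is an assumption-laden example for orientation, not code or a procedure taken from the article itself.

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues exceed
    the chosen quantile of eigenvalues obtained from random, uncorrelated data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        sim_eig[s] = np.linalg.eigvalsh(R)[::-1]
    threshold = np.quantile(sim_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Purely illustrative data with a two-factor structure:
rng = np.random.default_rng(1)
F = rng.standard_normal((500, 2))
loadings = np.array([[.7, 0], [.6, 0], [.8, 0], [0, .7], [0, .6], [0, .8]])
X = F @ loadings.T + 0.5 * rng.standard_normal((500, 6))
n_factors, _, _ = parallel_analysis(X)
print("Suggested number of factors:", n_factors)
```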


Multivariate Behavioral Research | 2010

A Comparative Investigation of Rotation Criteria Within Exploratory Factor Analysis

Daniel A. Sass; Thomas A. Schmitt

Exploratory factor analysis (EFA) is a commonly used statistical technique for examining the relationships between variables (e.g., items) and the factors (e.g., latent traits) they depict. There are several decisions that must be made when using EFA, with one of the more important being the choice of rotation criterion. This selection can be arduous given the numerous rotation criteria available and the lack of research that compares their function and utility. Historically, researchers have chosen rotation criteria based on whether or not factors are correlated and have failed to consider other important aspects of their data. This study reviews several rotation criteria, demonstrates how they may perform with different factor pattern structures, and highlights for researchers subtle but important differences between the criteria. Understanding these differences is critical so that researchers can make informed decisions about when particular rotation criteria may or may not be appropriate. The results suggest that, depending on the rotation criterion selected and the complexity of the factor pattern matrix, the interpretation of the interfactor correlations and factor pattern loadings can vary substantially. Implications and future directions are discussed.
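As a rough illustration of the kind of comparison described above, the sketch below fits the same simulated data under two oblique rotation criteria and prints the pattern loadings and interfactor correlations. The third-party factor_analyzer package, the simulated data, and the particular criteria shown are assumptions for illustration only; they are not the simulation design or software used in the study.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # third-party EFA package (assumed available)

# Simulate a two-factor structure with small cross-loadings (illustrative only).
rng = np.random.default_rng(0)
F = rng.multivariate_normal([0, 0], [[1, .4], [.4, 1]], size=1000)
pattern = np.array([[.7, .1], [.6, .2], [.8, .0],
                    [.1, .7], [.2, .6], [.0, .8]])
X = F @ pattern.T + 0.5 * rng.standard_normal((1000, 6))

for rotation in ("oblimin", "promax"):
    fa = FactorAnalyzer(n_factors=2, rotation=rotation)
    fa.fit(X)
    print(f"\n{rotation} pattern loadings:\n", np.round(fa.loadings_, 2))
    phi = getattr(fa, "phi_", None)  # interfactor correlation matrix (oblique rotations)
    if phi is not None:
        print(f"{rotation} interfactor correlations:\n", np.round(phi, 2))
```

Running one data set through several criteria and comparing the loadings and interfactor correlations side by side mirrors, in miniature, the comparison the study carries out at scale.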


Educational and Psychological Measurement | 2011

Rotation Criteria and Hypothesis Testing for Exploratory Factor Analysis: Implications for Factor Pattern Loadings and Interfactor Correlations

Thomas A. Schmitt; Daniel A. Sass

Exploratory factor analysis (EFA) has long been used in the social sciences to depict the relationships between variables/items and latent traits. Researchers face many choices when using EFA, including the choice of rotation criterion, which can be difficult given that few research articles have discussed and/or demonstrated their differences. The goal of the current study is to help fill this gap by reviewing and demonstrating the utility of several rotation criteria. Furthermore, this article discusses and demonstrates the importance of using factor pattern loading standard errors for hypothesis testing. The choice of a rotation criterion and the use of standard errors in evaluating factor loadings are essential so researchers can make informed decisions concerning the factor structure. This study demonstrates that depending on the rotation criterion selected, and the complexity of the factor pattern matrix, the interfactor correlations and factor pattern loadings can vary substantially. It is also illustrated that the magnitude of the factor loading standard errors can result in different factor structures. Implications and future directions are discussed.
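The hypothesis-testing idea described above amounts to a Wald-type test of a loading against its standard error. The snippet below is a generic sketch of how such a test and confidence interval are typically computed; the loading and standard error values are invented, and this is not the specific procedure implemented in the article.

```python
from scipy import stats

def loading_wald_test(loading, se, null_value=0.0, alpha=0.05):
    """Wald z-test and confidence interval for a single factor pattern loading."""
    z = (loading - null_value) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    crit = stats.norm.ppf(1 - alpha / 2)
    ci = (loading - crit * se, loading + crit * se)
    return z, p_value, ci

# Hypothetical loading of .35 with a standard error of .12:
z, p, ci = loading_wald_test(0.35, 0.12)
print(f"z = {z:.2f}, p = {p:.4f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```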


Structural Equation Modeling | 2014

Evaluating Model Fit With Ordered Categorical Data Within a Measurement Invariance Framework: A Comparison of Estimators

Daniel A. Sass; Thomas A. Schmitt; Herbert W. Marsh

Little research has compared estimation methods within a measurement invariance (MI) framework or determined whether research conclusions based on normal-theory maximum likelihood (ML) generalize to the robust ML (MLR) and weighted least squares means and variance adjusted (WLSMV) estimators. Using ordered categorical data, this simulation study addressed these questions by investigating 342 conditions. When testing for metric and scalar invariance, Δχ2 results revealed that Type I error rates varied across estimators (ML, MLR, and WLSMV) with symmetric and asymmetric data. The power of the Δχ2 test varied substantially based on the estimator selected, the type of noninvariant indicator, the number of noninvariant indicators, and the sample size. Although some of the changes in approximate fit indexes (ΔAFIs) are relatively independent of sample size, researchers who use ΔAFIs with WLSMV should use caution, as these statistics do not perform well with misspecified models. As a supplemental analysis, our results evaluate and suggest cutoff values based on previous research.
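For readers unfamiliar with the Δχ2 and ΔAFI tests referred to above, the sketch below shows the naive chi-square difference test between nested invariance models and a simple ΔCFI check. The chi-square and CFI values are hypothetical, the simple difference shown applies to ordinary ML only (MLR and WLSMV require scaled or adjusted difference tests as implemented in SEM software), and none of this reproduces the study's analyses.

```python
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained, chisq_free, df_free):
    """Naive (ML) chi-square difference test between nested invariance models.
    For MLR or WLSMV, a scaled/adjusted difference test is required instead."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Hypothetical configural (free) vs. metric (loadings-constrained) model fit:
d_chisq, d_df, p = chi_square_difference(chisq_constrained=312.4, df_constrained=170,
                                          chisq_free=298.1, df_free=164)
print(f"Delta chi-square = {d_chisq:.1f}, delta df = {d_df}, p = {p:.3f}")

# A common delta-AFI check: the change in CFI between the two models (values hypothetical).
delta_cfi = 0.951 - 0.958
print("Delta CFI =", round(delta_cfi, 3))
```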


Journal of Personality Disorders | 2011

Self-Report Methodology Is Insufficient for Improving the Assessment and Classification of Axis II Personality Disorders

Steven K. Huprich; Robert F. Bornstein; Thomas A. Schmitt

Current approaches to the assessment and classification of personality disorders (PDs) rely almost exclusively on self-report methodology. In this paper, we document the many difficulties with self-reports, including limitations in their accuracy, the confounding effect of mood state, and problems with the selection and retention of factors in factor analytic approaches to self-report questionnaires. We also discuss the role of implicit processes in self-reports, with special attention directed to the phenomenon of priming and its effect on outcome. To rectify these issues, we suggest a transtheoretical, multimethod, multimodal approach to personality pathology assessment and diagnosis, which utilizes the richness of prototypes and empirical findings on PD categories and pathologies.


Personality Disorders: Theory, Research, and Treatment | 2010

Comparing factor analytic models of the DSM-IV personality disorders.

Steven K. Huprich; Thomas A. Schmitt; David C. S. Richard; Iwona Chelminski; Mark Zimmerman

There is little agreement about the latent factor structure of the Diagnostic and Statistical Manual of Mental Disorders (DSM) personality disorders (PDs). Factor analytic studies over the past 2 decades have yielded different results, in part reflecting differences in factor analytic technique, the measure used to assess the PDs, and the changing DSM criteria. In this study, we explore the latent factor structure of the DSM-IV PDs in a sample of 1,200 psychiatric outpatients evaluated with the Structured Interview for DSM-IV PDs (B. Pfohl, N. Blum, & M. Zimmerman, 1997). We first evaluated 2 a priori models of the PDs with confirmatory factor analysis (CFA), reflecting their inherent organization in the DSM-IV: a 3-factor model and a 10-factor model. Fit statistics did not suggest that these models yielded an adequate fit. We then evaluated the latent structure with exploratory factor analysis (EFA). Multiple solutions produced more statistically and theoretically reasonable results, as well as clinically useful findings. On the basis of fit statistics and theory, 3 models were evaluated further: the 4-, 5-, and 10-factor models. The 10-factor EFA model, which did not resemble the 10-factor CFA model, was determined to be the strongest of the 3. Future research should use contemporary methods of evaluating factor analytic results in order to more thoroughly compare various factor solutions.
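For orientation on how such fit judgments are made, the sketch below computes two widely reported fit statistics, RMSEA and CFI, from a model's chi-square. The formulas are the standard textbook ones (some software divides by n rather than n minus 1), and the chi-square values are invented; this is a generic illustration, not output from the software used in the study.

```python
import numpy as np

def rmsea(chisq, df, n):
    """Root mean square error of approximation (standard formula)."""
    return float(np.sqrt(max(chisq - df, 0) / (df * (n - 1))))

def cfi(chisq_model, df_model, chisq_baseline, df_baseline):
    """Comparative fit index relative to the independence (baseline) model."""
    d_model = max(chisq_model - df_model, 0)
    d_baseline = max(chisq_baseline - df_baseline, d_model)
    return 1.0 - d_model / d_baseline

# Hypothetical values for a CFA model and its baseline model with n = 1200:
print("RMSEA:", round(rmsea(chisq=2150.0, df=610, n=1200), 3))
print("CFI:  ", round(cfi(2150.0, 610, 9800.0, 666), 3))
```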


Archive | 2013

Testing measurement and structural invariance: Implications for practice

Daniel A. Sass; Thomas A. Schmitt

As part of their research activities, researchers in all areas of education develop measuring instruments, design and conduct experiments and surveys, and analyze the resulting data. Educational research has a strong tradition of employing state-of-the-art statistical and psychometric (psychological measurement) techniques. Commonly referred to as quantitative methods, these techniques cover a range of statistical tests and tools. Quantitative research is essentially about collecting numerical data to explain a particular phenomenon of interest. Over the years, many methods and models have been developed to address the increasingly complex issues that educational researchers seek to address. This handbook serves as a reference for educational researchers and practitioners who wish to acquire knowledge and skills in quantitative methods for data analysis or to obtain deeper insights from published work. Written by experienced researchers and educators, each chapter covers a methodological topic with attention paid to the theory, procedures, and challenges in the use of that particular methodology. It is hoped that readers will come away from each chapter with a greater understanding of the methodology being addressed as well as of the directions for future developments within that methodological area.


Applied Measurement in Education | 2008

Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

Daniel A. Sass; Thomas A. Schmitt; Cindy M. Walker

Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal conditions using four latent trait estimation procedures and also evaluated whether the test composition, in terms of item difficulty level, reduces estimation error. Most importantly, both true and estimated item parameters were examined to disentangle the effects of latent trait estimation error from item parameter estimation error. Results revealed that non-normal latent trait distributions produced a considerably larger degree of latent trait estimation error than normal data. Estimated item parameters tended to have comparable precision to true item parameters, thus suggesting that increased latent trait estimation error results from latent trait estimation rather than item parameter estimation.
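To make latent trait estimation concrete, the sketch below computes an expected a posteriori (EAP) trait estimate under a two-parameter logistic model with a standard normal prior, one common procedure of the kind compared in the study. The item parameters and response pattern are invented, and the choice of EAP here is an illustrative assumption, not a reproduction of the study's four procedures.

```python
import numpy as np

def eap_theta_2pl(responses, a, b, n_points=61, prior_mean=0.0, prior_sd=1.0):
    """EAP estimate of theta under a 2PL model with a normal prior,
    computed on a fixed quadrature grid."""
    theta = np.linspace(-4, 4, n_points)
    # Item response probabilities at each quadrature point for each item.
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    likelihood = np.prod(np.where(responses == 1, p, 1 - p), axis=1)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return float(np.sum(theta * posterior))

# Hypothetical 5-item test: discriminations a, difficulties b, and one response pattern.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
responses = np.array([1, 1, 1, 0, 0])
print("EAP theta:", round(eap_theta_2pl(responses, a, b), 3))
```

Because the prior is normal, estimates of this kind can be pulled toward the center when the true trait distribution is skewed, which is one reason non-normal latent traits are harder to recover.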


Residential Treatment for Children & Youth | 2012

Implementing Trauma-Informed Treatment for Youth in a Residential Facility: First-Year Outcomes

Ricky Greenwald; Lynn Siradas; Thomas A. Schmitt; Summar Reslan; Julia Fierle; Brad Sande

Training in the Fairy Tale model of trauma-informed treatment was provided to clinical and direct care staff working with 53 youth in a residential treatment facility. Compared with the year prior to training, in the training year the average improvement in presenting problems increased by 34%, time to discharge was reduced by 39%, and the rate of discharge to a lower level of care doubled. The inclusion of numerous interventions, along with limitations in implementation and evaluation, makes it difficult to precisely identify the cause(s) of the improvement.


Journal of Child & Adolescent Trauma | 2010

Traumatic Incident Reduction for Urban At-Risk Youth and Unaccompanied Minor Refugees: Two Open Trials

Teresa Descilo; Richard Greenwald; Thomas A. Schmitt; Summar Reslan

Traumatic incident reduction (TIR) is a trauma resolution method that appears to be well tolerated and has yielded relatively rapid benefit in two adult treatment studies. This article reports on two open trials using TIR with 33 urban at-risk youth and 31 unaccompanied refugee minors. In both studies, participants consistently responded positively. In the second study, nearly all participants who began treatment with posttraumatic stress disorder ended without it, with an average of at least one significant trauma memory treated per session. TIR's apparent efficiency and effectiveness in these preliminary studies indicate its promise for child and adolescent treatment.

Collaboration


Dive into Thomas A. Schmitt's collaborations.

Top Co-Authors

Daniel A. Sass
University of Texas at San Antonio

Cindy M. Walker
University of Wisconsin–Milwaukee

Summar Reslan
Eastern Michigan University

Angela Lukomski
Eastern Michigan University

Barbara Moir
Henry Ford Health System