Publication


Featured research published by Kristine Y. Hogarty.


Educational and Psychological Measurement | 2005

The Quality of Factor Solutions in Exploratory Factor Analysis: The Influence of Sample Size, Communality, and Overdetermination.

Kristine Y. Hogarty; Constance V. Hines; Jeffrey D. Kromrey; John M. Ferron; Karen R. Mumford

The purpose of this study was to investigate the relationship between sample size and the quality of factor solutions obtained from exploratory factor analysis. This research expanded upon the range of conditions previously examined, employing a broad selection of criteria for the evaluation of the quality of sample factor solutions. Results showed that when communalities are high, sample size tended to have less influence on the quality of factor solutions than when communalities are low. Overdetermination of factors was also shown to improve the factor analysis solution. Finally, decisions about the quality of the factor solution depended upon which criteria were examined.
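The Monte Carlo logic of such a study can be sketched in a few lines. Everything below is an illustrative assumption rather than the authors' actual design: the two-factor pattern, the loading values, the sample sizes, and the use of scikit-learn's FactorAnalysis with a varimax rotation.

```python
# Sketch of one Monte Carlo cell: simulate data from a known two-factor
# model, fit exploratory factor analysis, and score loading recovery.
# All design values (loadings, n, replications) are illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Population pattern: 6 variables, 2 factors, communalities of 0.64.
L = np.array([[.8, 0], [.8, 0], [.8, 0],
              [0, .8], [0, .8], [0, .8]])
psi = 1 - (L ** 2).sum(axis=1)                # unique variances

def one_replication(n):
    f = rng.standard_normal((n, 2))           # factor scores
    e = rng.standard_normal((n, 6)) * np.sqrt(psi)
    x = f @ L.T + e
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(x)
    Lhat = fa.components_.T                   # estimated loadings (6 x 2)
    # Congruence with the population pattern, taking the best-matching
    # estimated column for each population factor.
    phi = np.abs(Lhat.T @ L) / np.outer(np.linalg.norm(Lhat, axis=0),
                                        np.linalg.norm(L, axis=0))
    return phi.max(axis=0).mean()

for n in (50, 400):                           # small vs large sample
    cong = np.mean([one_replication(n) for _ in range(20)])
    print(f"n={n}: mean congruence {cong:.3f}")
```

Repeating this over crossed levels of sample size, communality, and overdetermination is the basic shape of the design described above.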


Review of Educational Research | 2009

Multilevel Modeling: A Review of Methodological Issues and Applications

Robert F. Dedrick; John M. Ferron; Melinda R. Hess; Kristine Y. Hogarty; Jeffrey D. Kromrey; Thomas R. Lang; John D. Niles; Reginald S. Lee

This study analyzed the reporting of multilevel modeling applications of a sample of 99 articles from 13 peer-reviewed journals in education and the social sciences. A checklist, derived from the methodological literature on multilevel modeling and focusing on the issues of model development and specification, data considerations, estimation, and inference, was used to analyze the articles. The most common applications were two-level models where individuals were nested within contexts. Most studies were non-experimental and used nonprobability samples. The amount of data at each level varied widely across studies, as did the number of models examined. Analyses of reporting practices indicated some clear problems, with many articles not reporting enough information for a reader to critique the reported analyses. For example, in many articles, one could not determine how many models were estimated, what covariance structure was assumed, what type of centering if any was used, whether the data were consistent with assumptions, whether outliers were present, or how the models were estimated. Guidelines for researchers reporting multilevel analyses are provided.
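The most common application the review found, a two-level model with individuals nested in contexts, can be sketched as a random-intercept model with statsmodels (a tool chosen for illustration, not one used in the article); the data, variable names, and effect values below are simulated assumptions.

```python
# Minimal two-level application: individuals ("students") nested in
# contexts ("schools"), fit as a random-intercept model.
# Simulated data; names and effect values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_per = 30, 25

school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0.0, 1.0, n_schools)        # school-level random intercepts
x = rng.standard_normal(n_schools * n_per)
y = 2.0 + 0.5 * x + u[school] + rng.normal(0.0, 1.0, x.size)

df = pd.DataFrame({"y": y, "x": x, "school": school})

# Level-1 predictor x with a random intercept for school.  Reporting the
# assumed covariance structure and the estimation method, as done in the
# summary below, is among the practices the checklist asks authors for.
model = smf.mixedlm("y ~ x", df, groups=df["school"])
result = model.fit()
print(result.summary())
```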


Educational and Psychological Measurement | 2003

Another Look at Technology Use in Classrooms: The Development and Validation of an Instrument To Measure Teachers' Perceptions.

Kristine Y. Hogarty; Thomas R. Lang; Jeffrey D. Kromrey

This article describes the development and initial validation of scores from a survey designed to measure teachers’ reported use of technology in their classrooms. Based on data obtained from a sample of approximately 2,000 practicing teachers, factor analytic and correlational methods were used to obtain evidence of the validity of scores derived from responses to the instrument. In addition, analyses of Web and paper versions of the survey suggest relatively minor differences in responses, although the response rates for the paper version were substantially higher. The results were interpreted in terms of the utility of the instrument for measuring the confluence of factors that are critical for inquiry related to technology use in classrooms.


Psychometrika | 2004

Selection of Variables in Exploratory Factor Analysis: An Empirical Comparison of a Stepwise and Traditional Approach.

Kristine Y. Hogarty; Jeffrey D. Kromrey; John M. Ferron; Constance V. Hines

The purpose of this study was to investigate and compare the performance of a stepwise variable selection algorithm to traditional exploratory factor analysis. The Monte Carlo study included six factors in the design: the number of common factors; the number of variables explained by the common factors; the magnitude of factor loadings; the number of variables not explained by the common factors; the type of anomaly evidenced by the poorly explained variables; and sample size. The performance of the methods was evaluated in terms of selection and pattern accuracy, and bias and root mean squared error of the structure coefficients. Results indicate that the stepwise algorithm was generally ineffective at excluding anomalous variables from the factor model. The poor selection accuracy of the stepwise approach suggests that it should be avoided.


Behavior Research Methods, Instruments, & Computers | 2003

RETR_PWR: An SAS macro for retrospective statistical power analysis

Kristine Y. Hogarty; Jeffrey D. Kromrey

In contrast to prospective power analysis, retrospective power analysis provides an estimate of the statistical power of a hypothesis test after an investigation has been conducted rather than before. In this article, three approaches to obtaining point estimates of power and an interval estimation algorithm are delineated. Previous research on the bias and sampling error of these estimates is briefly reviewed. Finally, an SAS macro that calculates the point and interval estimates is described. The macro was developed to estimate the power of an F test (obtained from analysis of variance, multiple regression analysis, or any of several multivariate analyses), but it may be easily adapted for use with other statistics, such as chi-square tests or t tests.
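The macro itself is not reproduced here, but the basic point-estimate computation for an F test can be sketched with SciPy. Plugging the observed F in as the noncentrality is the naive "observed power" estimator; whether it matches any of the macro's three approaches exactly is an assumption.

```python
# Retrospective ("observed") power for an F test: the observed F fixes
# an estimated noncentrality, and power is read off the noncentral F
# distribution.  This naive point estimate is known to be biased.
from scipy.stats import f, ncf

def retrospective_power(f_obs, dfn, dfd, alpha=0.05):
    """Point estimate of power, given an observed F statistic."""
    nc_hat = dfn * f_obs                      # naive noncentrality estimate
    f_crit = f.ppf(1 - alpha, dfn, dfd)       # rejection threshold
    return ncf.sf(f_crit, dfn, dfd, nc_hat)   # P(reject | nc = nc_hat)

# Example: observed F(3, 96) = 4.0 at alpha = .05
print(round(retrospective_power(4.0, 3, 96), 3))
```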


Journal of Research on Computing in Education | 1999

An Examination of the Relationships between Student Conduct and the Number of Computers per Student in Florida Schools.

Ann E. Barron; Kristine Y. Hogarty; Jeffrey D. Kromrey; Peter Lenkway

The relationship between the number of computers in schools and student conduct was investigated using school-level data reported to the Department of Education by all Florida school districts for the 1993–1994, 1994–1995, and 1995–1996 school years. Computer use was defined as the total number of computers used for instruction, and student conduct was defined as the number of conduct violations and number of disciplinary actions taken. In addition, school attendance and staff turnover rates were analyzed. Results from the research among Florida schools reporting consistently increased use of computers in instruction revealed: (a) Elementary schools witnessed fewer conduct violations (effect sizes ranged from −0.67 to 0.04) and disciplinary actions (effect sizes ranged from −0.13 to −0.10), (b) middle/junior high schools experienced fewer conduct violations (effect sizes ranged from −0.35 to −0.14) and disciplinary actions (−0.21 to −0.18), and (c) high schools experienced fewer crimes against prop...


Educational and Psychological Measurement | 2007

Interval Estimates of Multivariate Effect Sizes: Coverage and Interval Width Estimates Under Variance Heterogeneity and Nonnormality

Melinda R. Hess; Kristine Y. Hogarty; John M. Ferron; Jeffrey D. Kromrey

Monte Carlo methods were used to examine techniques for constructing confidence intervals around multivariate effect sizes. Using interval inversion and bootstrapping methods, confidence intervals were constructed around the standard estimate of Mahalanobis distance (D²), two bias-adjusted estimates of D², and Huberty’s I. Interval coverage and width were examined across conditions by adjusting sample size, number of variables, population effect size, population distribution shape, and the covariance structure. The accuracy and precision of the intervals varied considerably across methods and conditions; however, the interval inversion approach appears to be promising for D², whereas the percentile bootstrap approach is recommended for the other effect size measures. The results imply that it is possible to obtain fairly accurate coverage estimates for multivariate effect sizes. However, interval width estimates tended to be large and uninformative, suggesting that future efforts might focus on investigating design factors that facilitate more precise estimates of multivariate effect sizes.
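A percentile-bootstrap interval of the kind examined in the study can be sketched as follows; the data, group sizes, and the choice to bootstrap D² itself (rather than the other effect size measures) are illustrative assumptions.

```python
# Mahalanobis D^2 between two groups, with a percentile-bootstrap
# confidence interval.  Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(2)

def d_squared(a, b):
    """Standard estimate of Mahalanobis D^2 using the pooled covariance."""
    diff = a.mean(axis=0) - b.mean(axis=0)
    n1, n2 = len(a), len(b)
    s_pooled = ((n1 - 1) * np.cov(a, rowvar=False) +
                (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
    return float(diff @ np.linalg.solve(s_pooled, diff))

# Two 3-variable groups whose means differ on the first variable.
g1 = rng.standard_normal((60, 3)) + [0.8, 0, 0]
g2 = rng.standard_normal((60, 3))

# Percentile bootstrap: resample rows within each group, recompute D^2,
# and take the 2.5th and 97.5th percentiles of the bootstrap estimates.
boot = [d_squared(g1[rng.integers(0, 60, 60)],
                  g2[rng.integers(0, 60, 60)]) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"D^2 = {d_squared(g1, g2):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```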


Journal of Computing in Higher Education | 2008

Laptop Computers in Teacher Preparation: Lessons Learned from the University of South Florida Implementation

Ann E. Barron; Carine M. Feyten; Melissa Venable; Amy Hilbelink; Kristine Y. Hogarty; Jeffrey D. Kromrey; Thomas R. Lang

This article provides an overview of the successful laptop implementation in the College of Education at the University of South Florida (USF). The pilot initiative began with one cohort of preservice teachers in 2003; since then, the program has expanded throughout the college. Through a chronological outline of the issues, formative evaluations, modifications, and expansion of the project as it progressed through the years, this article shares lessons learned related to the process and outcomes. For example, initial implementation decisions included issues such as whether participation should be voluntary or mandatory, which computer platforms would be supported, and how training and support would be provided. As the project expanded, questions related to ongoing maintenance, financial aid, and other issues were addressed.


Action in Teacher Education | 2017

Elementary Preservice Teacher Field Supervision: A Survey of Teacher Education Programs

Jennifer Jacobs; Kristine Y. Hogarty; Rebecca West Burns

There is a heightened focus within teacher education to centralize clinical experiences and develop strong partnerships between schools and universities. University field supervisors fulfill a critical role within clinical experiences because they are uniquely situated in spaces where they can help preservice teachers and school-based partners integrate theory and practice. However, historically, field supervision has been devalued within teacher education programs. The purpose of this research was to gain insight into how various teacher education programs are actualizing field experiences, and specifically field supervision, within this time of reform in teacher education. This survey explored the question, “What is the state of preservice teacher field supervision within elementary teacher education programs?” Findings from this study suggest the teacher education programs surveyed are positioned to respond to reforms in the areas of enacting multiple field experiences as well as conceptualizing field supervision beyond observation and feedback. Findings can help teacher education programs begin to develop common understandings, goals, and nomenclature about actualizing reforms for clinically rich teacher education.


Review of Educational Research | 2002

Editors’ Introduction: Special Issue on Standards-Based Reforms and Accountability

Kathryn M. Borman; Jeffrey D. Kromrey; Constance V. Hines; Kristine Y. Hogarty

In this special issue of Review of Educational Research, we present a collection of articles that cohere around the common theme of standards-based reform and accountability. The authors critically examine the extant literature in a time of increasing polarization of views about the desirability and consequences of the accountability movement and of educational reform in general. In the first article, Madhabi Chatterji focuses our attention on purposes, models, and methods of inquiry. This article classifies the literature into seven categories and analyzes it with respect to content and methodology, using theoretically supported taxonomies. With the premise that past reforms were meant to be systemic, Chatterji examines the extent to which related inquiry has been guided by designs that explicitly or implicitly acknowledge the presence of a system, and evaluates the utility of designs in support of large-scale systematic changes in education. In the second article, James P. Spillane, Brian J. Reiser, and Todd Reimer develop a cognitive framework to characterize sense-making in the implementation process. Their framework is especially relevant for recent education policy initiatives, such as standards-based reform, that press for tremendous changes in classroom instruction. The framework is premised on the assumption that if teachers and school administrators are to implement state and national standards, they must first interpret them. Their understanding of reform proposals is the basis on which they decide whether to ignore, sabotage, adapt, or adopt them. The authors argue that behavioral change includes a fundamentally cognitive component. They draw on both theoretical and empirical literatures to develop a cognitive perspective on implementation. In the third article, Laura Desimone reviews, critiques, and synthesizes research that documents the implementation of comprehensive school reform models. 
She applies the theory that the more specific, consistent, authoritative, powerful, and stable a policy is, the stronger will be its implementation. The research suggests that all five of these policy attributes contribute to implementation and that, in addition, each is related to particular aspects of the process: specificity to implementation fidelity, power to immediate implementation effects, and authority, consistency, and stability to long-lasting change. In the final article, Jian Wang and Sandra J. Odell draw on the mentoring and learning-to-teach literature, exploring how mentoring that supports novices’ learning to teach in ways consistent with the standards brings about reform, whether in preservice or induction contexts. The authors point out that the assumptions on which mentoring programs are based often do not focus on standards and that the prevailing mentoring practices are consistent with those assumptions rather than with the assumptions behind standards-based teaching. To promote mentoring that reforms teaching in ways consistent with the aims of the standards movement, policymakers must determine effective methods of educating mentors and mentoring program designers.

Collaboration


Top co-authors of Kristine Y. Hogarty, all at the University of South Florida:

Jeffrey D. Kromrey
Melinda R. Hess
John M. Ferron
Thomas R. Lang
Ann E. Barron
Constance V. Hines
Lou M. Carey
Amy Hilbelink
Melissa Venable