Publication

Featured research published by Randy G. Floyd.


School Psychology Quarterly | 2008

Effects of General and Broad Cognitive Abilities on Mathematics Achievement.

Gordon E. Taub; Timothy Z. Keith; Randy G. Floyd; Kevin S. McGrew

This study investigated the direct and indirect effects of general intelligence and 7 broad cognitive abilities on mathematics achievement. Structural equation modeling was used to investigate the simultaneous effects of both general and broad cognitive abilities on students’ mathematics achievement. A hierarchical model of intelligence derived from the Cattell–Horn–Carroll (CHC) taxonomy of intelligence was used for all analyses. The participants consisted of 4 age-differentiated subsamples (ranging from ages 5 to 19) from the standardization sample of the Woodcock–Johnson III (WJ III; Woodcock, McGrew, & Mather, 2001). Data from each of the 4 age-differentiated subsamples were divided into 2 data sets. At each age level, one data set was used for model testing and modification, and a second data set was used for model validation. The following CHC broad cognitive ability factors demonstrated statistically significant direct effects on the mathematics achievement variables: Fluid Reasoning, Crystallized Intelligence, and Processing Speed. In contrast, across all age levels, the general intelligence factor demonstrated indirect effects on the mathematics achievement variable.


School Psychology Quarterly | 2007

Cattell-Horn-Carroll Cognitive Abilities and Their Effects on Reading Decoding Skills: g Has Indirect Effects, More Specific Abilities Have Direct Effects.

Randy G. Floyd; Timothy Z. Keith; Gordon E. Taub; Kevin S. McGrew

This study employed structural equation modeling to examine the effects of Cattell–Horn–Carroll (CHC) abilities on reading decoding skills using five age-differentiated subsamples from the standardization sample of the Woodcock–Johnson III (Woodcock, McGrew, & Mather, 2001). Using the Spearman Model including only g, strong direct effects of g on reading decoding skills were demonstrated at all ages. Using the Two-Stratum Model including g and broad abilities, direct effects of the broad abilities Long-Term Storage and Retrieval, Processing Speed, Crystallized Intelligence, Short-Term Memory, and Auditory Processing on reading decoding skills were demonstrated at select ages. Using the Three-Stratum Model including g, broad abilities, and narrow abilities, direct effects of the broad ability Processing Speed and the narrow abilities Associative Memory, Listening Ability, General Information, Memory Span, and Phonetic Coding were demonstrated at select ages. Across both the Two-Stratum Model and the Three-Stratum Model at all ages, g had very large but indirect effects. The findings suggest that school psychologists should interpret measures of some specific cognitive abilities when conducting psychoeducational assessments designed to explain reading decoding skills.


Journal of School Psychology | 2011

An overview and analysis of journal operations, journal publication patterns, and journal impact in school psychology and related fields

Randy G. Floyd; Kathryn M. Cooley; James E. Arnett; Thomas K. Fagan; Sterett H. Mercer; Christine Hingle

This article describes the results of three studies designed to better understand the journal operations, publishing practices, and impact of school psychology journals in recent years. The first study presents the results of a survey focusing on journal operations and peer-review practices that was completed by 61 journal editors of school psychology and aligned journals. The second study presents the results of review and classification of all articles appearing in one volume year for nine school psychology journals (i.e., The California School Psychologist, Canadian Journal of School Psychology, Journal of Applied School Psychology, Journal of School Psychology, Psychology in the Schools, School Psychology Forum, School Psychology International, School Psychology Quarterly, and School Psychology Review). The third study employed multilevel modeling to investigate differences in the longitudinal trends of impact factor data for five school psychology journals listed in the Web of Science (i.e., Journal of School Psychology, Psychology in the Schools, School Psychology International, School Psychology Quarterly, and School Psychology Review). The article addresses implications for authors, editors, and journal editorial teams as well as the status and impact of school psychology journals.


Professional Psychology: Research and Practice | 2008

The Exchangeability of IQs: Implications for Professional Psychology

Randy G. Floyd; M. H. Clark; William R. Shadish

IQs are important measures in the practice of psychology. Psychologists may frequently expect that IQs from different test batteries are reasonably exchangeable as measures of general intelligence. Results presented in this article provide evidence that different test batteries produce less similar IQs for samples of school-age children and undergraduate students than may have been expected. In fact, psychologists can anticipate that 1 in 4 individuals taking an intelligence test battery will receive an IQ more than 10 points higher or lower when taking another battery. Resulting suggestions for practice include carefully choosing batteries that provide representative sampling of specific abilities, differential weighting, or both; attending to unreliability in measurement; closely monitoring behaviors that undermine assessment of general intelligence; and considering the benefits of obtaining multiple IQs when such scores are used to make high-stakes diagnostic or eligibility decisions.


Journal of School Psychology | 2011

Strategies and attributes of highly productive scholars and contributors to the school psychology literature: Recommendations for increasing scholarly productivity

Rebecca S. Martínez; Randy G. Floyd; Luke W. Erichsen

In all academic fields, there are scholars who contribute to the research literature at exceptionally high levels. The goal of the current study was to discover what school psychology researchers with remarkably high levels of journal publication do to be so productive. In Study 1, 94 highly productive school psychology scholars were identified from past research, and 51 (39 men, 12 women) submitted individual, short-answer responses to a 5-item questionnaire regarding their research strategies. A constant comparative approach was employed to sort and code individual sentiments (N = 479) into categories. Seven broad categories of counsel for increasing productivity emerged: (a) research and publication practices and strategies, (b) collaboration, mentoring, and building relationships, (c) navigating the peer-review process, (d) strategies to bolster writing productivity and excellence, (e) personal character traits that foster productivity, (f) preparation before entering the professoriate, and (g) other noteworthy sentiments. Results are discussed in terms of nine recommendations for scholars and graduate students who wish to increase their productivity. In Study 2, five of the most productive scholars (1 woman, 4 men) participated in a semi-structured interview about their high levels of productivity. Interviews were recorded, transcribed, and analyzed, and a case analysis approach was employed to profile each scholar. Study limitations and suggestions for future research are discussed.


Journal of Psychoeducational Assessment | 2003

Behavior Rating Scales for Assessment of Emotional Disturbance: A Critical Review of Measurement Characteristics

Randy G. Floyd; Jillayne E. Bose

This review provides an overview and critique of the design characteristics, technical properties, and validity evidence of behavior rating scales focusing on measurement of the characteristics of emotional disturbance. Manuals and published research supporting nine parent and teacher rating scales were reviewed. These rating scales included the Behavior Evaluation Scale-2: Home Version (McCarney, 1994a); the Behavior Evaluation Scale-2: School Version (McCarney, 1994b); the Behavior Disorders Identification Scale, Second Edition: Home Version (McCarney & Arthaud, 2000a); the Behavior Disorders Identification Scale, Second Edition: School Version (McCarney & Arthaud, 2000b); the Devereux Behavior Rating Scale-School Form (Naglieri, LeBuffe, & Pfeiffer, 1993); the Emotional and Behavior Problem Scale, Second Edition: Home Version (McCarney & Arthaud, 2001a); the Emotional and Behavior Problem Scale, Second Edition: School Version (McCarney & Arthaud, 2001b); the Scale for Assessing Emotional Disturbance (Epstein & Cullinan, 1998); and the Social-Emotional Dimension Scale (Hutton & Roberts, 1986). All instruments demonstrated several limitations in technical adequacy and were supported by generally incomplete and weak collections of validity evidence that limit their usefulness during the assessment of behaviors associated with emotional disturbance.


International Journal of School and Educational Psychology | 2016

Classification agreement analysis of Cross-Battery Assessment in the identification of specific learning disorders in children and youth

John H. Kranzler; Randy G. Floyd; Nicholas Benson; Brian Zaboski; Lia Thibodaux

The Cross-Battery Assessment (XBA) approach to identifying a specific learning disorder (SLD) is based on the postulate that deficits in cognitive abilities in the presence of otherwise average general intelligence are causally related to academic achievement weaknesses. To examine this postulate, we conducted a classification agreement analysis using the Woodcock-Johnson III Tests of Cognitive Abilities and Achievement. We examined the broad cognitive abilities of the Cattell-Horn-Carroll theory held to be meaningfully related to basic reading, reading comprehension, mathematics calculation, and mathematics reasoning across age groups. Results of analyses of 300 participants in three age groups (6–8, 9–13, and 14–19 years) indicated that the XBA method is very reliable and accurate in detecting true negatives. Mean specificity and negative predictive value were 92% and 89% across all broad cognitive abilities and academic domains. Mean sensitivity and positive predictive values, however, were generally quite low, indicating that this method is very poor at detecting true positives. Mean sensitivity and positive predictive value were 21% and 34% across all broad cognitive abilities and academic domains. In sum, results of this study do not support the use of the XBA method for identifying SLD. Implications of our findings for research and practice are discussed.


Psychological Assessment | 2013

How well is psychometric g indexed by global composites? Evidence from three popular intelligence tests.

Matthew R. Reynolds; Randy G. Floyd; Christopher R. Niileksela

Global composites (e.g., IQs) calculated in intelligence tests are interpreted as indexes of the general factor of intelligence, or psychometric g. It is therefore important to understand the proportion of variance in those global composites that is explained by g. In this study, we calculated this value, referred to as hierarchical omega, using large-scale, nationally representative norming sample data from 3 popular individually administered tests of intelligence for children and adolescents. We also calculated the proportion of variance explained in the global composites by g and the group factors, referred to as omega total, or composite reliability, for comparison purposes. Within each battery, g was measured equally well. Using total sample data, we found that 82%-83% of the total test score variance was explained by g. The group factors were also measured in the global composites, with both g and group factors explaining 89%-91% of the total test score variance for the total samples. Global composites are primarily indexes of g, but the group factors, as a whole, also explain a meaningful amount of variance.


WJ III Clinical Use and Interpretation: Scientist-Practitioner Perspectives | 2003

Interpretation of the Woodcock-Johnson III Tests of Cognitive Abilities: Acting on Evidence

Randy G. Floyd; Renee B. Shaver; Kevin S. McGrew

This chapter deals with the interpretation of the Woodcock–Johnson III Tests of Cognitive Abilities (WJ III COG), which represents the culmination of nearly four decades of systematic psychometric test development. The WJ III COG was developed to provide reliable and valid measures of a number of important cognitive abilities for individuals ranging from preschool-age children to persons in late adulthood. When the accumulated evidence for the validity of the WJ III COG is evaluated within the context of the Standards for Educational and Psychological Testing, it is clear that the WJ III COG has "raised the bar" with regard to state-of-the-art assessment of human cognitive abilities. Because test validation is considered an ongoing process, further research by the test authors and by independent researchers should reveal additional valid uses and interpretations of the battery and, most likely, uses and interpretations that should not be undertaken because little or no validity evidence supports them. Psychologists, educators, and other assessment specialists should continue to seek evidence supporting their interpretations and, when possible, act upon that evidence. Finally, this chapter provides the basis on which these professionals may accomplish these goals.


Journal of School Psychology | 2011

Publication criteria and recommended areas of improvement within school psychology journals as reported by editors, journal board members, and manuscript authors.

Craig A. Albers; Randy G. Floyd; Melanie J. Fuhrmann; Rebecca S. Martínez

Two online surveys were completed by editors, associate editors, editorial board members, and members or fellows of Division 16 of the American Psychological Association. These surveys targeted (a) the criteria for a manuscript to be published in school psychology journals and (b) the components of the peer-review process that should be improved. Although prior surveys have targeted these issues in general, none have been conducted in school psychology or examined differences in perspectives between those who serve in a reviewing capacity and those who have served only in an author capacity. Results identified the most important characteristics for a manuscript submitted for publication to be positively reviewed, as well as differences in the expectations for such characteristics between novice authors (who do not contribute to the journal editorial process) and those authors who serve the journal editorial process more extensively (e.g., editors and associate editors). In addition, key areas to target for improvement within the reviewing process (e.g., reducing potential reviewer bias) were identified.

Collaboration

Top co-authors of Randy G. Floyd:

Nicholas Benson (University of South Dakota)
Timothy Z. Keith (University of Texas at Austin)
Sarah M. Irby (University of Tennessee Health Science Center)