Publication


Featured research published by John H. Kranzler.


Intelligence | 2001

Meta-analysis of the relationship between intelligence and inspection time

Jennifer L. Grudnik; John H. Kranzler

This study replicated and extended Kranzler and Jensen's [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). Separate meta-analyses were conducted on obtained correlations (rs) between IT and general IQ for the total sample and for studies using samples of adults and children. Two new meta-analyses were also conducted. The first compared the IT–IQ r between visual and auditory IT; the second compared the r between strategy users and nonusers. For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction). No statistically significant difference was observed between the mean corrected r of −.51 for adults and −.44 for children. The mean corrected rs for visual and auditory IT measures were −.49 and −.58, respectively, suggesting that the relationship between IT and IQ is comparable across type of IT task. The mean corrected r of −.77 for strategy nonusers was statistically significantly higher than the r of −.60 for strategy users. Implications of these findings for future research are discussed.
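The corrected correlations above follow the logic of psychometric meta-analysis, in which observed correlations are adjusted for statistical artifacts such as measurement unreliability. As a minimal illustration only (the abstract does not enumerate the specific artifact corrections applied), the classical correction for attenuation is

r_c = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}

where r_{xy} is the observed IT–IQ correlation and r_{xx} and r_{yy} are the reliabilities of the IT and IQ measures. With hypothetical reliabilities of .60 for IT and .90 for IQ, an observed r of −.30 would rise to about −.41; additional corrections (e.g., for range restriction) move the estimate further toward the reported −.51.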


Psychological Assessment | 2010

Independent examination of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV): what does the WAIS-IV measure?

Nicholas Benson; David M. Hulac; John H. Kranzler

Published empirical evidence for the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) does not address some essential questions pertaining to the applied practice of intellectual assessment. In this study, the structure and cross-age invariance of the latest WAIS-IV revision were examined to (a) elucidate the nature of the constructs measured and (b) determine whether the same constructs are measured across ages. Results suggest that a Cattell-Horn-Carroll (CHC)-inspired structure provides a better description of test performance than the published scoring structure does. Broad CHC abilities measured by the WAIS-IV include crystallized ability (Gc), fluid reasoning (Gf), visual processing (Gv), short-term memory (Gsm), and processing speed (Gs), although some of these abilities are measured more comprehensively than are others. Additionally, the WAIS-IV provides a measure of quantitative reasoning (QR). Results also suggest a lack of cross-age invariance resulting from age-related differences in factor loadings. Formulas for calculating CHC indexes and suggestions for interpretation are provided.
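The cross-age invariance question is whether each subtest relates to its latent ability in the same way at every age. As a brief sketch of the constraint being tested (the notation is ours, not taken from the article), metric invariance requires the factor loading matrix to be equal across the age groups,

\Lambda^{(1)} = \Lambda^{(2)} = \cdots = \Lambda^{(G)},

and the age-related differences in loadings reported above mean that this equality constraint does not hold, which is why the same constructs cannot be assumed to be measured identically across ages.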


The Journal of Pediatrics | 2000

Is short stature a handicap? A comparison of the psychosocial functioning of referred and nonreferred children with normal short stature and children with normal stature

John H. Kranzler; Arlan L. Rosenbloom; Briley E. Proctor; Frank B. Diamond; Melanie Watson

OBJECTIVES Normal short stature (NSS), defined as height below the 5th percentile for age and sex norms that is not due to illness, hormonal deficiency, or part of a dysmorphic syndrome, has been thought to have a deleterious effect on psychosocial functioning based on observations of referred populations. Recent studies of nonreferred children with NSS, however, have demonstrated normal function. This study directly compared the psychosocial functioning of referred children with NSS, nonreferred children with NSS, and children with normal stature. STUDY DESIGN Participants, 90 children (46 boys, 44 girls) between 6 and 12 years of age (mean, 9.6 years), were administered intelligence and achievement tests. Parents and teachers assessed adaptive and problem behaviors. Family adaptability and cohesiveness were measured. RESULTS Intelligence and achievement for referred and nonreferred children with NSS were average. Referred children with NSS were reported to have more externalizing behavior problems and poorer social skills than nonreferred children with NSS and children in the control group. Family adaptability and cohesiveness were comparable across groups. CONCLUSIONS Children with NSS have normal psychosocial function, and results suggest that externalizing behavior problems, attention problems, and poor social skills in children referred to clinics for NSS are inappropriately attributed to short stature.


Intelligence | 1991

The nature of psychometric g: Unitary process or a number of independent processes?

John H. Kranzler; Arthur R. Jensen

This study investigates whether a unitary elemental process or a number of independent elemental processes, as measured by elementary cognitive tasks (ECTs), underlie psychometric g. A sample of 101 university students was administered two intelligence tests (Raven's Advanced Progressive Matrices and the Multidimensional Aptitude Battery) and a large battery of ECTs. The results of this study reject the theory that some single or unitary process underlies psychometric g. Rather, it appears that individual differences in psychometric g may reflect as many as four independent components of variance. These findings support the theory that various complex mental tests correlate highly with each other, giving rise to a psychometric g factor, because they require some of the same elemental processes. Further research will be needed to determine precisely the number and nature of these components. It is also important to note that the multiple correlation of g regressed on these four components derived from elementary cognitive variables is .542. The maximum correlation possible between the psychometric variables and the battery of ECTs in this study is nearly as high as correlations among various standardized IQ tests themselves (canonical r = .603). After correction for the considerable restriction of range on IQ in the sample, the r is increased to .722. Hence, this battery of ECTs accounts for approximately half of the phenotypic variance in g and probably as much as 70% of the genotypic variance. Moreover, the finding that individual differences in conceptually distinct processes (such as speed of visual search and speed of memory search) are highly correlated indicates the presence of individual differences in some neurological level of processing common to both tasks.
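The restriction-of-range adjustment reflects the fact that university students span a narrower range of IQ than the general population, which attenuates correlations. The abstract does not state which correction formula was used; as an illustration only, the standard univariate correction is

r_c = \frac{r\,(S/s)}{\sqrt{1 - r^{2} + r^{2}(S/s)^{2}}}

where r is the correlation in the restricted sample, s is the restricted standard deviation of IQ, and S is the standard deviation in the reference population. A ratio S/s of roughly 1.4, for example, moves a correlation of about .60 to about .72, consistent with the values reported above.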


Journal of School Psychology | 1998

The Construct Validity of Curriculum-Based Measurement of Reading: An Empirical Test of a Plausible Rival Hypothesis

John H. Kranzler; Mary T. Brownell; M. David Miller

Research has confirmed that curriculum-based measurement (CBM) of oral reading fluency and measures of reading comprehension are highly correlated, as predicted by developmental theories of reading. Research on CBM, however, has only begun to rule out plausible alternative explanations of this relationship—an important aspect of a strong program of construct validation (e.g., Messick, 1989). This study investigated one such rival hypothesis by examining the relative roles of general cognitive ability, speed and efficiency of elemental cognitive processing, and oral reading fluency in the prediction of reading comprehension. Results of simultaneous multiple regression analyses substantiate the construct validity of CBM oral reading fluency. These findings indicate that the significant relationship between oral reading fluency and reading comprehension cannot be explained by general cognitive ability or by processing speed and efficiency. CBM oral reading fluency also did not correlate significantly with any of the processing speed and efficiency tasks. Interestingly, however, CBM oral reading fluency accounted for less variance in reading comprehension (r2 = .17) than expected based on the results of previous research and less than that explained by general cognitive ability (r2 = .24). When controlling for psychometric g and processing speed in the regression analyses, CBM oral reading explained 11% of the variance in reading comprehension. Implications of these results for further research on the construct validity of CBM are discussed.
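The 11% figure reflects the increment in explained variance when CBM oral reading fluency is added to a regression that already contains psychometric g and processing speed. As a sketch of this variance-partitioning logic (the notation and example values are ours, not the authors'),

\Delta R^{2}_{\mathrm{CBM}} = R^{2}(g, \mathrm{speed}, \mathrm{CBM}) - R^{2}(g, \mathrm{speed}),

so, for instance, a full-model R^{2} of .35 against a reduced-model R^{2} of .24 would yield a unique contribution of .11 for oral reading fluency.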


Personality and Individual Differences | 1992

A test of Larson and Alderton's (1990) worst performance rule of reaction time variability

John H. Kranzler

The validity of Larson and Alderton's (1990; Intelligence, 14, 309–325) ‘Worst Performance’ rule of reaction time (RT) variability, which states that the worst RT trials correlate more highly with intelligence than the fastest RT trials, was tested by examining the potentially confounding effect of range restriction across the RT bands of elementary cognitive tasks (ECTs) of various degrees of information processing complexity. Results of this study indicate that the intersubject variability within RT bands is not systematically related to the pattern of correlations of RT with intelligence. In addition, results of an analysis of the movement time (MT) bands provide evidence of the divergent validity of the worst performance rule. These data also support Jensen's (1985; Methodological and statistical advances in the study of individual differences. New York: Plenum. 1987; Speed of information processing and intelligence. Norwood, NJ: Ablex) argument for the separate measurement of RT and MT in all speed of information processing research. In sum, results of this study substantiate the Worst Performance rule for individual differences in RT variability and intelligence.
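The band analysis referred to above rank-orders each person's RT trials from fastest to slowest and then correlates each band with intelligence across people. The following Python sketch illustrates that banding procedure under simulated data; it is an assumption-laden demonstration, not the original analysis, and the quartile banding, variable names, and data-generating choices are all hypothetical.

import numpy as np

# Illustrative sketch only: compute the correlation of each RT band
# (fastest to slowest within-person trials) with IQ across subjects.
def rt_band_correlations(rt_trials, iq, n_bands=4):
    # rt_trials: (n_subjects, n_trials) array of reaction times in ms
    # iq: (n_subjects,) array of intelligence scores
    sorted_rts = np.sort(rt_trials, axis=1)              # fastest -> slowest per subject
    bands = np.array_split(sorted_rts, n_bands, axis=1)  # e.g., quartile bands of trials
    band_means = np.column_stack([b.mean(axis=1) for b in bands])
    return [float(np.corrcoef(band_means[:, k], iq)[0, 1]) for k in range(n_bands)]

# Simulated demonstration (hypothetical parameters): baseline speed is only
# weakly related to IQ, while the slow tail of the RT distribution is more
# strongly related, so the slowest band should correlate most negatively
# with IQ, as the worst performance rule predicts.
rng = np.random.default_rng(0)
n_subjects, n_trials = 200, 60
iq = rng.normal(100, 15, n_subjects)
base_rt = 350 - 0.8 * (iq - 100) + rng.normal(0, 30, n_subjects)
tail_scale = np.clip(60 + 1.8 * (110 - iq) + rng.normal(0, 10, n_subjects), 10, None)
rt = base_rt[:, None] + rng.exponential(tail_scale[:, None], (n_subjects, n_trials))
print(rt_band_correlations(rt, iq))  # correlations ordered fastest band first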


Intelligence | 1991

Unitary "g": Unquestioned Postulate or Empirical Fact?

John H. Kranzler; Arthur R. Jensen

Carroll (1991) has argued that our empirical test of the hypothesis that psychometric g is a unitary factor fails methodologically to prove that g is not unitary, and that our finding could have resulted from some impurity in the g extracted from 11 psychometric tests. The gist of this argument is that our multiple regression method for testing the unity of g and the outcome of this test would be valid only if it were certain that we had a “perfect estimate” of g as the dependent variable. We argue that the hypothetical ideal of a perfectly pure g is empirically unattainable, but that such purity is an unnecessary condition for testing the hypothesis by the method we used. Our analyses suggest that one would have to assume an improbably large amount of “impure” variance in our g factor to make Carroll's argument compelling. Finally, we are most grateful for Carroll's elegant hierarchical factor analysis of our psychometric and chronometric variables. The unity of g cannot be proved or disproved by factor analytic methods per se and the unitary g hypothesis has only the status of a parsimonious assumption within that framework. But Carroll's factor analysis of our data indeed beautifully represents the relationship between conventional psychometric tests and elementary cognitive tasks based on chronometric techniques and further highlights the central role of efficiency (= speed and consistency) of information processing in g.


Journal of Psychoeducational Assessment | 2006

Effect of Instructions on Curriculum-Based Measurement of Reading

Elayne Proesel Colón; John H. Kranzler

The aim of this study was to investigate the effect of instructions for curriculum-based measurement (CBM) of reading on (a) the number of words read correctly and incorrectly per minute and (b) the relationship between CBM reading and reading achievement. Results indicated that the specific instructions used have a significant impact on CBM reading outcomes. Statistically significant mean differences were found among the fast, best, and baseline reading conditions in the number of words read correctly and in the number of errors. Correlations between words read correctly per minute and a test of reading achievement were statistically significant and substantial for all three conditions, but differences among their correlations were not. These results underscore the importance of using standardized instructions for CBM both within and across settings. Implications of these results for the responsiveness-to-intervention method for identifying children with learning difficulties are discussed.


Journal of School Psychology | 2011

Research productivity and scholarly impact of APA-accredited school psychology programs: 2005–2009

John H. Kranzler; Sally L. Grapin; Matt L. Daley

This study examined the research productivity and scholarly impact of faculty in APA-accredited school psychology programs using data in the PsycINFO database from 2005 to 2009. We ranked doctoral programs on the basis of authorship credit, number of publications, and number of citations. In addition, we examined the primary publication outlets of school psychology program faculties and the major themes of research during this time period. We compared our results with those of a similar study that examined data from a decade earlier. Limitations and implications of this study are also discussed.


Journal of Psychoeducational Assessment | 2000

Independent Examination of the Factor Structure of the Cognitive Assessment System (CAS): Further Evidence Challenging the Construct Validity of the CAS

John H. Kranzler; Timothy Z. Keith; Dawn P. Flanagan

This study is the first to examine independently the factor structure of the Cognitive Assessment System (CAS; Naglieri & Das, 1997) with a primary dataset not collected by its authors. Participants were 155 students (59 boys, 96 girls), ages 8 to 11 (M = 9.81 years, SD = 0.88), in Grades 3 to 6. Confirmatory factor analysis (CFA) was used to compare the fit provided by the planning, attention, and simultaneous-successive (PASS) model, the theoretical model underlying the CAS, with alternative models of cognitive ability suggested by previous research. Results of this study indicated that the PASS model did not provide a better fit to the data than did alternative hierarchical and nonhierarchical models. Not only were the Planning and Attention factors of the PASS model virtually indistinguishable (r = .88), but they demonstrated inadequate specificity for meaningful interpretation. The model reflecting the actual hierarchical structure of the CAS was found to fit the data no better than alternative models based on different theoretical orientations. Of the hierarchical models examined in this study, the best fitting was a hierarchical (PA)SS model with one second-order general factor, psychometric g, and three first-order factors reflecting Fluid Intelligence/Visual Processing (Simultaneous), Memory Span (Successive), and Processing Speed (Planning/Attention). In sum, results of this study support Kranzler and Keith's (1999) conclusion that the CAS lacks structural fidelity, which means that the CAS does not measure what its authors intended it to measure. Results of this study, therefore, provide further evidence challenging the construct validity of the CAS.
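Competing CFA models of the kind described above are typically evaluated by comparing model fit; for nested models this can be done with a chi-square difference test,

\Delta\chi^{2} = \chi^{2}_{\mathrm{constrained}} - \chi^{2}_{\mathrm{less\ constrained}}, \qquad \Delta df = df_{\mathrm{constrained}} - df_{\mathrm{less\ constrained}},

where a statistically significant \Delta\chi^{2} indicates that the more constrained model fits reliably worse. This is offered only as a sketch of the model-comparison logic; the abstract does not specify which fit statistics were used in this study.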

Collaboration


Dive into John H. Kranzler's collaborations.

Top Co-Authors

Nicholas Benson

University of South Dakota

Sally L. Grapin

Montclair State University

Timothy Z. Keith

University of Texas at Austin