
Publication


Featured research published by Dawn P. Flanagan.


Journal of Psychoeducational Assessment | 1995

A Critical Review of the Technical Characteristics of New and Recently Revised Intelligence Tests for Preschool Children

Dawn P. Flanagan; Vincent C. Alfonso

This paper examined the psychometric properties of intelligence tests for preschoolers, including standardization, reliability, test floors, item gradients, and validity. The WPPSI-R, DAS, S-B:IV, WJ-R COG, and BSID-II were reviewed. The psychometric properties of these instruments are weakest at the lower end of the preschool age range (i.e., 2-6 to 3-6), a finding that is consistent with previous research. The WJ-R COG and BSID-II are among the better instruments for use with very young children because they were rated as technically adequate across most criteria. The psychometric properties were strongest for most instruments at the middle (i.e., 3-6 to 4-6) and upper (i.e., 4-6 to 5-6) levels of the preschool age range. Because all intelligence tests have different strengths and limitations, the technical characteristics of these tests should be considered carefully before one selects an instrument for use with preschoolers. Future research should examine the role of intelligence within a framework of developing abilities in young children, so that its relevance in early educational interventions, as well as in the diagnosis and classification of preschoolers, can be realized.


Psychology in the Schools | 1996

Convergent validity of the BASC and SSRS: Implications for social skills assessment

Dawn P. Flanagan; Vincent C. Alfonso; Louis H. Primavera; Laura Povall; Deirdre Higgins

The present study examined the psychometric relationship between two new rating scales, the Behavior Assessment System for Children (BASC; Reynolds & Kamphaus, 1992) and the Social Skills Rating System (SSRS; Gresham & Elliott, 1990), for a sample (N = 53) of minority kindergarten children using both parent and teacher ratings. The similarities and differences between these instruments were investigated through correlational and content analyses. In general, the results provide preliminary convergent validity evidence for the BASC and SSRS. In regard to the Social Skills subscale of the BASC, convergent validity evidence was demonstrated for the parent form of this instrument, but not the teacher form, when the SSRS Social Skills scale was used as the criterion. In addition, the correlations between the various scales of the BASC and SSRS were in the expected direction. That is, the correlation between the BASC Adaptive Skills Composite and the SSRS Social Skills scale was moderate in the teacher group (r = .44) and high in the parent group (r = .54). Similarly, correlations between the BASC Hyperactivity, Aggression, and Externalizing scales and the SSRS Problem Behaviors scale ranged from .50 to .60 and .50 to .56 in the teacher and parent groups, respectively. Implications regarding the practical utility of the BASC and SSRS for assessing social skills functioning, in particular, were presented.
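
To make the kind of analysis reported above concrete, here is a minimal Python sketch of a convergent validity check between two rating scales. The scale names, scores, and effect size are hypothetical stand-ins, not data from the study.

```python
# Minimal sketch of a convergent validity check between two rating scales.
# All scores below are simulated; they are not data from the study.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standard scores for the same 53 children on two instruments
# that claim to measure related constructs.
basc_adaptive = rng.normal(100, 15, size=53)
ssrs_social = 0.5 * basc_adaptive + rng.normal(50, 13, size=53)

# Convergent validity evidence: the two scales should correlate substantially.
r = np.corrcoef(basc_adaptive, ssrs_social)[0, 1]
print(f"BASC Adaptive Skills x SSRS Social Skills: r = {r:.2f}")
```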


Learning and Individual Differences | 2002

The contribution of general and specific cognitive abilities to reading achievement

Michael L. Vanderwood; Kevin S. McGrew; Dawn P. Flanagan; Timothy Z. Keith

Since the development of the Wechsler scales, significant advances have been made in intelligence theory and testing technology that have the potential to provide a more comprehensive understanding of cognitive abilities than currently exists. For this study, the standardization sample of the Woodcock–Johnson Psychoeducational Battery-Revised (WJ-R)—an empirically supported measure of several constructs within the Cattell–Horn–Carroll (CHC) theory of cognitive abilities—was used to analyze the contribution of specific cognitive abilities to reading achievement at five developmental levels. Structural equation modeling (SEM), with calibration and cross-validation samples, of four different models of the hypothesized relations among the variables was conducted to determine if specific abilities can provide relevant information regarding the development of reading skills. The results of this study clearly indicate that Gc (comprehension knowledge or crystallized intelligence) and Ga (auditory processing) play an important role in the development of reading skills.
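
The calibration/cross-validation design described above can be sketched in simplified form. The example below substitutes ordinary least-squares regression for structural equation modeling and uses simulated scores; it is not the study's model or data, only an illustration of fitting on one half of a sample and checking generalization on the other half.

```python
# Simplified sketch of a calibration/cross-validation design: fit on one
# half of the sample, evaluate on the held-out half. OLS regression stands
# in for SEM here, and all scores are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Hypothetical standard scores: Gc and Ga predicting reading achievement.
gc = rng.normal(100, 15, n)
ga = rng.normal(100, 15, n)
reading = 0.5 * gc + 0.3 * ga + rng.normal(20, 10, n)

X = np.column_stack([np.ones(n), gc, ga])
calib, valid = slice(0, n // 2), slice(n // 2, n)

# Fit regression weights on the calibration half only.
beta, *_ = np.linalg.lstsq(X[calib], reading[calib], rcond=None)

# Evaluate on the validation half: does the fitted model generalize?
pred = X[valid] @ beta
ss_res = np.sum((reading[valid] - pred) ** 2)
ss_tot = np.sum((reading[valid] - reading[valid].mean()) ** 2)
print(f"cross-validated R^2 = {1 - ss_res / ss_tot:.2f}")
```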


Journal of School Psychology | 1998

Interpreting Intelligence Tests from Contemporary Gf-Gc Theory: Joint Confirmatory Factor Analysis of the WJ-R and KAIT in a Non-White Sample.

Dawn P. Flanagan; Kevin S. McGrew

In the present study, the correlations of test scores between the Woodcock-Johnson-Revised (WJ-R) and the Kaufman Adolescent and Adult Intelligence Test (KAIT) were factor analyzed in order to test the replicability of the contemporary Horn-Cattell Gf-Gc model in a non-White sample and to gain a more complete understanding of the factorial structure of the KAIT. The empirically supported Gf-Gc theoretical model underlying the WJ-R was used as the criterion against which to evaluate the cognitive abilities that are measured by the KAIT. Participants were 114 6th-, 7th-, and 8th-grade students ranging in age from 10 years, 11 months to 15 years, 11 months. Confirmatory factor analyses were used to evaluate and compare eight a priori factor models and one post-hoc factor model. A Gf-Gc nine-factor model provided the most plausible a priori fit to the WJ-R/KAIT data, a finding that extends the replicability of the Gf-Gc model to a non-White sample. The factorial structure of the KAIT put forward by its authors (i.e., a two-factor Gf-Gc model) was not supported. It appears that the KAIT measures Glr or long-term retrieval (associative memory) and Gsm or short-term memory (memory span) in addition to fluid and crystallized abilities. These results provide support for use of Gf-Gc theory in a non-White sample and for interpreting the KAIT from contemporary Gf-Gc theory rather than from a two-factor model.
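
Competing a priori factor models of the kind compared above are commonly evaluated against one another with a nested-model chi-square difference test. The sketch below shows only the mechanics of that test; the fit statistics are hypothetical, not values from this study.

```python
# Nested-model chi-square difference test, the standard way to compare a
# restricted factor model against a less restricted one.
# The fit statistics below are hypothetical, not values from the study.
from scipy.stats import chi2

# Hypothetical fit: a two-factor model vs. a nine-factor model.
chisq_two, df_two = 512.4, 76
chisq_nine, df_nine = 148.9, 48

delta_chisq = chisq_two - chisq_nine
delta_df = df_two - df_nine

# A small p-value means the less restricted (nine-factor) model fits
# significantly better than the restricted (two-factor) model.
p = chi2.sf(delta_chisq, delta_df)
print(f"delta chi2({delta_df}) = {delta_chisq:.1f}, p = {p:.3g}")
```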


Journal of Psychoeducational Assessment | 2006

Test Review: Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV)

Alan S. Kaufman; Dawn P. Flanagan; Vincent C. Alfonso; Jennifer T. Mascolo

Within the field of psychological assessment, the Wechsler scales continue to be the most widely used intelligence batteries. The concepts, methods, and procedures inherent in the design of the Wechsler scales have been so influential that they have guided most of the test development and research in the field for more than a half century (Flanagan, McGrew, & Ortiz, 2000). Virtually every reviewer of these scales, including those who have voiced significant concerns about them, has acknowledged the monumental impact that they have had on scientific inquiry into the nature of human intelligence and the structure of cognitive abilities. Kaufman’s (1993) review of the third edition of the Wechsler Intelligence Scale for Children (WISC-III), “King WISC the Third Assumes the Throne,” is a good example of the Wechsler scales’ position of authority and dominance in the field (Flanagan et al., 2000). Although the strengths of the Wechsler scales have always outweighed their weaknesses, critics have identified some salient limitations of these instruments, in particular their lack of a contemporary theoretical and research base (e.g., Braden, 1995; Burns & O’Leary, 2004; Flanagan & Kaufman, 2004; Keith, Fine, Taub, Reynolds, & Kranzler, 2006; Little, 1992; McGrew, 1994; Shaw, Swerdlik, & Laurent, 1993; Sternberg, 1993; Witt & Gresham, 1985). Nevertheless, when viewed from an historical perspective, the importance, influence, and contribution of David Wechsler’s scales to the science of intellectual assessment are both obvious and profound.


Archive | 2007

Ability—Achievement Discrepancy, Response to Intervention, and Assessment of Cognitive Abilities/Processes in Specific Learning Disability Identification: Toward a Contemporary Operational Definition

Kenneth A. Kavale; Dawn P. Flanagan

The category of specific learning disability (SLD) remains the largest and most contentious area of special education. A primary problem is overidentification of students with SLD, as evidenced by the SLD category representing approximately 5% of the school population and 50% of the special education population. Partially responsible for this problem is the overreliance on the ability–achievement discrepancy criterion as the sole indicator of SLD, a practice that remains widespread. Recently, new ways to conceptualize and define SLD have been proposed in an attempt to remedy the overidentification problem (e.g., Fletcher, Coulter, Reschly, & Vaughn, 2004). Most popular is a model that conceptualizes SLD in terms of a failure to respond to intervention (RTI) (Berninger & Abbott, 1994). The purpose of this chapter is to briefly review these two methods of SLD identification, the ability–achievement discrepancy criterion and RTI. It is our belief that neither of these methods, when used as the sole indicator of SLD, can identify this condition reliably and validly. This is because SLD may be present in students with and without a significant ability–achievement discrepancy (see Aaron (1997) for a comprehensive review), in students who fail to respond to scientifically based interventions, and in students who respond favorably to them. We believe the missing component in both of these SLD methods is information on the student’s functioning across a broad range of cognitive abilities and processes, particularly those that explain significant variance in academic achievement. Indeed, the federal definition of SLD is “a disorder in one or more of the basic psychological processes...” (Individuals with Disabilities Education Act [IDEA] 2004). Therefore, this chapter discusses evaluation of cognitive abilities/processes as defined by contemporary Cattell–Horn–Carroll (CHC) theory and its research base. Inherent in this discussion is a summary of the research on the relations between cognitive abilities/processes and academic achievement, information we believe is necessary to (a) determine whether a processing deficit(s) is the probable cause of a student’s academic difficulties and (b) restructure and redirect interventions for nonresponders in an RTI model. Keogh (2005) discussed criteria for determining the adequacy and utility of a diagnostic system, such as the ability–achievement discrepancy and RTI models. The criteria include homogeneity (Do category members resemble one another?), reliability (Is there agreement about who should be included in the category?), and validity (Does category membership provide consistent information?). Keogh (2005, p. 101) suggested that SLD “is real and that it describes problems that are distinct from


Journal of Psychoeducational Assessment | 2000

Independent Examination of the Factor Structure of the Cognitive Assessment System (CAS): Further Evidence Challenging the Construct Validity of the CAS

John H. Kranzler; Timothy Z. Keith; Dawn P. Flanagan

This study is the first to independently examine the factor structure of the Cognitive Assessment System (CAS; Naglieri & Das, 1997) with a primary dataset not collected by its authors. Participants were 155 students (59 boys, 96 girls), ages 8 to 11 (M = 9.81 years, SD = 0.88), in Grades 3 to 6. Confirmatory factor analysis (CFA) was used to compare the fit provided by the planning, attention, and simultaneous-successive (PASS) model, the theoretical model underlying the CAS, with alternative models of cognitive ability suggested by previous research. Results of this study indicated that the PASS model did not provide a better fit to the data than did alternative hierarchical and nonhierarchical models. Not only were the Planning and Attention factors of the PASS model virtually indistinguishable (r = .88), but they demonstrated inadequate specificity for meaningful interpretation. The model reflecting the actual hierarchical structure of the CAS was found to fit the data no better than alternative models based on different theoretical orientations. Of the hierarchical models examined in this study, the best fitting was a hierarchical (PA)SS model with one second-order general factor, psychometric g, and three first-order factors reflecting Fluid Intelligence/Visual Processing (Simultaneous), Memory Span (Successive), and Processing Speed (Planning/Attention). In sum, results of this study support Kranzler and Keith's (1999) conclusion that the CAS lacks structural fidelity, which means that the CAS does not measure what its authors intended it to measure. Results of this study, therefore, provide further evidence challenging the construct validity of the CAS.


School Psychology International | 1995

Incidence of Basic Concepts in the Directions of New and Recently Revised American Intelligence Tests for Preschool Children

Dawn P. Flanagan; Tammy Kaminer; Vincent C. Alfonso; Damon E. Rader

The purpose of this paper was to provide comparative data, as a follow-up to Bracken (1986), regarding basic concepts contained in the test directions of five new or recently revised American intelligence tests for preschoolers. Two measures of basic concepts, the Bracken Basic Concept Scale (BBCS; Bracken, 1984) and the Boehm Test of Basic Concepts-Preschool Version (Boehm-Preschool; Boehm, 1986), were used to assess: (a) the presence of basic concepts in the directions of intelligence tests; (b) the percentage of preschool-age children who understand these terms; and (c) the frequency with which basic concepts occur throughout test administration procedures. Results indicated that use of the Boehm-Preschool alone, or an examination of only the presence of basic concepts and the proportion of children who understand them, provides limited information about the difficulty of test directions. It is not until one tabulates the total number of times (that is, the frequency) that each basic concept occurs in test directions that the true complexity of those directions is realized. This review showed that all of the intelligence test directions contain excessive use of difficult basic concepts. Although data are not available to examine the extent to which children from countries other than America are likely to understand test directions, in light of the present results it seems reasonable to assume that all young children may have difficulty comprehending intelligence test directions, regardless of country of origin.
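
The tabulation the authors argue for reduces to counting how often each basic concept occurs across the full set of test directions, not merely whether it occurs. A minimal Python sketch, using an invented concept list and invented directions:

```python
# Minimal sketch of tabulating basic-concept frequency in test directions.
# The concept list and directions below are invented illustrations.
from collections import Counter

basic_concepts = {"top", "bottom", "same", "different", "before", "after"}

directions = [
    "Point to the one at the top that is the same as this one.",
    "Put the block on top of the box before you turn the page.",
    "Show me the one that is different.",
]

counts = Counter(
    word
    for direction in directions
    for word in direction.lower().replace(".", "").replace(",", "").split()
    if word in basic_concepts
)

# Presence alone understates difficulty; frequency shows how often a child
# must understand each concept across the whole administration.
for concept, n in counts.most_common():
    print(f"{concept}: {n}")
```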


Journal of Psychoeducational Assessment | 1997

Improvement in Academic Screening Instruments? A Concurrent Validity Investigation of the K-FAST, MBA, and WRAT-3

Dawn P. Flanagan; Kevin S. McGrew; Elaine Abramowitz; Lois Lehner; Stephanie Untiedt; Dave Berger; Howard Armstrong

The present study examined the extent to which the scores from the K-FAST, MBA, and WRAT-3 are comparable in degree of correlation and mean scores in a sample of university students (N = 62). Results generally provided concurrent validity evidence for the three academic screening tests, although several significant differences were found between measures. An examination of test intercorrelations and test content revealed that the reading and writing domains are not assessed similarly across batteries. The intercorrelations between reading scores were relatively low, ranging from .31 (K-FAST and WRAT-3) to .48 (MBA and WRAT-3). The MBA provides the broadest assessment of reading (word recognition, word comprehension, passage comprehension), followed by the K-FAST. The WRAT-3 Reading test assesses mainly word recognition, as demonstrated by a .68 correlation with the MBA Identification test. The MBA also provides the broadest assessment of writing (punctuation, capitalization, spelling, word usage), followed by the WRAT-3 (which assesses only spelling). The math scores from the three batteries were most consistently correlated (rs ranging from .52 to .54), a finding that supports the validity of the mathematics scores obtained from these instruments. Recommendations for the use and interpretation of the K-FAST, MBA, and WRAT-3 are offered, and avenues for future research are suggested.


Psychology in the Schools | 1993

WIAT subtest and composite predicted‐achievement values based on WISC‐III verbal and performance IQs

Dawn P. Flanagan; Vincent C. Alfonso

Critical values tables for determining significant differences between Wechsler IQs and WIAT subtests and composites based on a predicted-achievement method are provided in the WIAT manual for the Full Scale IQ and have been constructed recently for Verbal and Performance IQs (Flanagan & Alfonso, 1993). In order to use these tables, however, predicted-achievement scores are required. The process of calculating predicted-achievement scores is time-consuming and may result in errors, especially when more than one ability-achievement comparison is warranted. The present paper provides tables of WIAT subtest and composite predicted-achievement standard scores based on WISC-III Verbal and Performance IQs. These tables allow examiners to quickly determine ability-achievement discrepancies based on WISC-III Verbal or Performance IQs when they are used in conjunction with the critical values tables provided in our earlier article. These tables are most useful for the accurate assessment and diagnosis of learning disabilities.
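
The predicted-achievement method that such tables precompute is a regression-to-the-mean prediction from the IQ score. Here is a minimal sketch, assuming both tests use standard scores (M = 100, SD = 15); the correlation and critical value are hypothetical placeholders, not values from the WIAT manual or the tables described above.

```python
# Minimal sketch of the regression-based predicted-achievement method that
# tables like those described above precompute. The correlation and the
# critical value below are hypothetical, not published values.

def predicted_achievement(iq: float, r: float, mean: float = 100.0) -> float:
    """Regress the achievement prediction toward the mean in proportion to
    the ability-achievement correlation r (both tests: M = 100, SD = 15)."""
    return mean + r * (iq - mean)

verbal_iq = 115
wiat_reading = 92
r_ability_achievement = 0.60  # hypothetical ability-achievement correlation
critical_value = 11.5         # hypothetical critical value from a table

predicted = predicted_achievement(verbal_iq, r_ability_achievement)
discrepancy = predicted - wiat_reading

print(f"predicted achievement = {predicted:.1f}, discrepancy = {discrepancy:.1f}")
if abs(discrepancy) >= critical_value:
    print("Discrepancy exceeds the critical value (significant).")
```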

Collaboration


Dive into Dawn P. Flanagan's collaborations.

Top Co-Authors

James B. Hale, Philadelphia College of Osteopathic Medicine

Timothy Z. Keith, University of Texas at Austin