Kinnard White
University of North Carolina at Chapel Hill
Publications
Featured research published by Kinnard White.
Educational and Psychological Measurement | 1999
Carl W. Swartz; Stephen R. Hooper; Melissa B. Wakely; Renée E.L de Kruif; Martha Reed; Timothy T. Brown; Melvin D. Levine; Kinnard White
Issues surrounding the psychometric properties of writing assessments have received ongoing attention. However, the reliability estimates of scores derived from the various holistic and analytical scoring strategies reported in the literature have relied on classical test theory (CTT), which accounts for only a single source of error variance within a given analysis. Generalizability theory (GT) is a more powerful and flexible strategy that allows the simultaneous estimation of multiple sources of error variance when estimating the reliability of test scores. Using GT, two studies were conducted to investigate the impact of the number of raters and the type of decision (relative vs. absolute) on the reliability of writing scores. The results of both studies indicated that reliability coefficients for writing scores decline (a) as the number of raters is reduced and (b) when absolute rather than relative decisions are made.
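As a brief sketch of the coefficients at issue (standard generalizability-theory notation, not taken from the article, assuming a single-facet persons-by-raters design with n_r raters): the relative (generalizability) coefficient is E\rho^2 = \sigma_p^2 / (\sigma_p^2 + \sigma_{pr}^2 / n_r), while the absolute (dependability) coefficient also charges the rater main effect to error, \Phi = \sigma_p^2 / (\sigma_p^2 + (\sigma_r^2 + \sigma_{pr}^2) / n_r). Because the absolute coefficient adds \sigma_r^2 to the error term, and both error terms grow as n_r shrinks, absolute-decision coefficients are never larger than relative ones and both decline as raters are removed, consistent with the pattern the two studies report.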
Journal of School Psychology | 1971
Kinnard White; Richard Allen
Abstract This study tested two hypotheses: (a) pre-adolescent boys will show greater growth in positive self-concept as a result of a counseling-centered art program than as a result of an intensive non-directive counseling program, and (b) this growth effect will continue into adolescence. Ss were 30 boys who had just completed the sixth grade. The treatment took place daily over an eight-week summer session. A follow-up was conducted 14 months later. A pretest, posttest, follow-up design was used, with the ten scales of the Tennessee Self Concept Scale as the dependent variables. An analysis of covariance (ANCOVA) supported both hypotheses.
Educational and Psychological Measurement | 1990
Carl W. Swartz; Kinnard White; Gary B. Stuck; Toni Patterson
The factorial structure of performance ratings on the 27 teaching practices contained in the North Carolina Teaching Performance Appraisal Instrument (TPAI) is reported in this paper. As currently used, ratings on the TPAI yield five scores: (a) Management of Instructional Time, (b) Management of Student Behavior, (c) Instructional Presentation, (d) Instructional Monitoring, and (e) Instructional Feedback. This five-function scoring scheme is based on logical judgment rather than empirical evidence. The results of a study using factor analysis procedures suggest that a five-factor solution paralleling the current scoring scheme was not as parsimonious as a two-factor solution. The clarity and meaningfulness of the two-factor solution's interpretation provide supporting evidence for the construct validity of the TPAI, as well as suggestions for a more utilitarian procedure for using the instrument in large-scale teaching performance assessment programs.
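A hedged illustration of the parsimony comparison (standard common factor model notation, not drawn from the article): with p = 27 rated practices and k factors, the model reproduces the correlation matrix as \Sigma \approx \Lambda\Lambda' + \Psi, requiring roughly p·k loadings plus p uniquenesses. Moving from k = 5 to k = 2 removes on the order of 3·27 = 81 loading parameters, so a two-factor solution that reproduces the correlations among ratings comparably well and remains cleanly interpretable is the more parsimonious account.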
Journal of Educational Research | 2001
John P. Galassi; Kinnard White; Elizabeth M. Vesilind; Michael E. Bryan
Abstract The views of university faculty and public school personnel toward collaborative research in the second year of a professional development schools (PDS) partnership were compared by means of a survey and a structured interview. Although participants were involved in a collaborative partnership, they subscribed to a traditional, rather than a collaborative, conception of educational research. Whether a collaborative model of research can flourish in a PDS depends on participants’ ability (a) to develop a common mindset about what collaborative research is and how it relates to practice and (b) to achieve a positive balance of its costs and benefits. It is important in the early stages of a PDS to identify whether school and university faculty have different views about research so they can develop a common research perspective that will enable collaboration to flourish over the long term.
Elementary School Journal | 1983
Richard H. Coop; Kinnard White; Barbara Tapscott; Linda Lee
Psychological Reports | 1977
Robert H. Bradley; Gary B. Stuck; Richard H. Coop; Kinnard White
One of the many criticisms of public school education over the past several years centers on the inability of students to communicate through writing. In fact, public concern over the level of writing competency displayed by recent graduates of secondary schools has led 23 of 50 states to initiate some form of statewide assessment.
Journal of Educational Research | 1966
Kinnard White
Rotter (6) implied that measures of generalized locus of control may be inappropriate for many studies. In certain situations more accurate behavioral predictions may be obtained by measuring more specific expectancies. This paper describes a scale designed to assess locus of control orientation in three achievement domains, intellectual, social, and physical, each of which is characterized by different beliefs and efforts. The Locus of Control Inventory for Three Achievement Domains has 48 items to be answered yes or no. Half the items in each domain measure control orientation for successful outcomes, the other half for unsuccessful outcomes. The scale was based on an earlier 60-item version. Items had point-biserial correlations of at least .30 with the total subscale and a discrimination index of at least .30. The scale was given to 373 students ages 12 to 18 yr. to compute KR-20 reliability coefficients: Intellectual subscale (r = .53), Social subscale (r = .54), Physical subscale (r = .52), and Total scale (r = .75). Intercorrelations among subscales ranged from .44 to .57. Support for the scale's validity derives from a correlation between the Intellectual subscale and Crandall's (4) Intellectual Achievement Responsibility scale (r = .78, df = 58, p < .01). Correlations for the other two subscales were smaller (.45 and .54). Correlations between the subscales and the Children's Scale (5) ranged from .43 to .49. Bradley and Teeter (2) also reported values as high as .40 for 223 junior high and high school students between teachers' ratings of considerate behavior and scores on the Social subscale. Data from 306 individuals from 13 to 90 yr. old indicated different age-related trends for the three subscales (3). Bradley and Gaa (1) also found that students who had 6 weekly goal-setting conferences with their teacher scored more internal only on the Intellectual subscale than those who had no conference. Correlations among locus of control measures indicate approximately 25% shared variance plus considerable unique variance for each of the three domains assessed, thus corroborating Rotter's position about generalized and specific control expectancies.
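For reference, the KR-20 coefficients reported above follow the standard formula (a textbook sketch, not a computation from the article's data): for k dichotomous items with proportion passing p_i, q_i = 1 - p_i, and total-score variance \sigma_X^2, KR-20 = (k / (k - 1)) (1 - \sum_i p_i q_i / \sigma_X^2). With only 16 yes-no items per domain subscale, modest coefficients in the low .50s are unsurprising, while the 48-item Total scale reaches .75.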
NASSP Bulletin | 1987
Kinnard White; Marvin D. Wyne; Gary B. Stuck; Richard H. Coop
Abstract Ss, 81 first-year and 62 second-year female elementary school teachers, were administered a scale of career involvement in the spring of the school year. This variable, career involvement, was then analyzed in relation to persistence in the teaching profession for a second or third consecutive year. Teachers who dropped out of the teaching profession after one or two years of teaching differed on the career involvement variable from those who remained in the profession for a second or third consecutive year (p = .01, two-tailed).
Educational and Psychological Measurement | 1988
Kinnard White; Dean R. Smith; Tandra Cunningham
Not all the skills considered essential for effective teaching can be part of a single instrument, say these authors, but the instrument described here includes what their study found to be the generic practices that teachers need to perform successfully in the classroom. It is especially useful for helping beginning teachers, they write.
Peabody Journal of Education | 1999
John P. Galassi; Laura Brader-Araje; Linda Brooks; Priscilla Dennison; M. Gail Jones; Dorothy J. Mebane; Jean Parrish; Melissa Richer; Kinnard White; Elizabeth M. Vesilind
Two studies related to the validity argument for ratings of classroom teaching performance derived from the North Carolina Teaching Performance Appraisal Instrument (TPAI) are reported. Study I examined the relationship between ratings on the TPAI and pupil achievement. The results of Study I indicated that teaching performance ratings on the TPAI accounted for .70 of the variance in pupil achievement after adjusting pupil achievement for student ability. Study II examined the sensitivity of teaching performance ratings on the TPAI to improvement due to coaching by clinical faculty. The results of Study II indicated that ratings on Management of Instructional Time and Providing Instructional Feedback were the most sensitive to coaching, whereas ratings on Instructional Presentation and Instructional Monitoring were only weakly related to coaching, and ratings on Management of Student Behavior were not changed by coaching.