Kathy E. Green
University of Denver
Publication
Featured research published by Kathy E. Green.
Educational and Psychological Measurement | 2009
Pamela S. Van Horn; Kathy E. Green; Monica Martinussen
This article reports results of a meta-analysis of survey response rates in published research in counseling and clinical psychology over a 20-year span and describes reported survey administration procedures in those fields. Results of 308 survey administrations showed a weighted average response rate of 49.6%. Among possible moderators, response rates differed only by population sampled, journal in which articles were published, sampling source and method, and use of follow-up. Researchers whose studies were included in this meta-analysis used follow-up but rarely used incentives, prenotification, or other response-facilitation methods to maximize response rates. Although the future of survey research in general may rely more heavily on Internet data collection, mail surveys dominate in this field.
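As a back-of-the-envelope illustration of the pooled statistic reported above, a weighted average response rate combines each administration's rate in proportion to its sample size. The rates and sample sizes below are hypothetical stand-ins, not the meta-analysis data:

```python
# Illustrative sketch: sample-size-weighted average response rate
# across survey administrations. The figures below are hypothetical.

def weighted_response_rate(rates, sample_sizes):
    """Return the sample-size-weighted mean of the response rates."""
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(rates, sample_sizes)) / total_n

rates = [0.62, 0.45, 0.38]   # response rate per administration
ns = [200, 500, 300]         # questionnaires sent per administration
print(round(weighted_response_rate(rates, ns), 3))  # → 0.463
```

Weighting by sample size keeps a small administration with an extreme rate from dominating the pooled estimate, which is why meta-analyses report the weighted rather than the simple mean.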
Journal of Educational Administration | 2002
Jennifer L. Edwards; Kathy E. Green; Cherie A. Lyons
Examines the personal empowerment and efficacy of teachers and relates these constructs to environmental characteristics in order to give principals information for assisting teachers' personal growth. Presents multiple regressions for the Vincenz empowerment scale with the School Culture Survey, teacher efficacy scale, learner‐centered battery, and paragraph completion method, as well as for satisfaction and age‐related variables. Multiple Rs were low to moderate for all variables except the paragraph completion method, which was nonsignificant. Significant predictors of personal empowerment were administrator professional treatment of teachers, reflective self‐awareness, honoring of student voice, personal teaching efficacy, and satisfaction with teaching as a career. Presents strategies principals can use to help teachers increase their empowerment.
Psychology & Marketing | 1996
Kathy E. Green
Effects of sociodemographic factors on mail survey response rate, response speed, and data quality are summarized. Consistent with previous reviews, evidence for effects of education is strongest with possible effects of age and gender. Evidence for effects of other sociodemographic factors is either ambiguous or absent. Speed and quality of response are also associated with educational level and possibly age. Sociodemographic factors are briefly discussed in light of three theories of mail survey response behavior.
Journal of Educational Computing Research | 2002
Catherine G. Frantom; Kathy E. Green; Eleanor R. Hoffman
The impact of technology on the current and future lives of society's youth makes it especially important to understand technology from their perspective. Although today's children are the beneficiaries of an evolving technological society, few studies have addressed the assessment of their attitudes toward technology. This study describes the development of the Children's Attitude Toward Technology Scale (CATS) with 574 children in a rural school district. Principal components analysis of the CATS followed by varimax rotation indicated that item intercorrelations could be explained by two factors, entitled “interest/aptitude” and “alternative preferences.” Subscales at two test administrations demonstrated good internal consistency and moderate test-retest stability. Significant differences were found on interest scores when comparing elementary and middle school students and on items reflecting alternative preferences. In addition, attitudes varied by gender on subscales. Initial analyses suggest that this new measure effectively assesses children's interest in and aptitude for technology. Further study is needed to validate the measure.
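A minimal numpy sketch of the analysis pipeline named above, principal components extraction followed by varimax rotation. The item responses, item count, and two-factor structure are synthetic stand-ins, not the CATS data:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vT = np.linalg.svd(
            loadings.T @ (L**3 - (1.0 / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        R = u @ vT
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return loadings @ R

# Synthetic responses: 574 "children", 10 items driven by 2 latent factors.
rng = np.random.default_rng(0)
factors = rng.normal(size=(574, 2))
weights = np.vstack([np.eye(2)] * 5)                 # 5 items per factor
items = factors @ weights.T + 0.5 * rng.normal(size=(574, 10))

# Principal components from the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)              # ascending eigenvalues
top = eigvals.argsort()[::-1][:2]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
rotated = varimax(loadings)

# Varimax is orthogonal, so item communalities are unchanged by rotation.
print(np.allclose((loadings**2).sum(axis=1), (rotated**2).sum(axis=1)))  # → True
```

Because varimax is an orthogonal rotation, each item's communality (row sum of squared loadings) is preserved; the rotation only redistributes loading across factors to make each item load mainly on one factor, which is what lets the factors be labeled.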
Medical Care | 1985
Michael K. Chapko; Marilyn Bergner; Kathy E. Green; Barbara H. Beach; Peter Milgrom; Nicholas Skalabrin
As part of the Washington State Dental Auxiliaries Project, a 42-item measure of patient satisfaction with dental care was developed. The measure comprises 13 subscales: dentist–patient relations, technical quality of care, access, patient waiting time, cost, facilities, availability, continuity, pain, auxiliaries performing expanded duties, staff–patient relations, staff technical quality of care, and office atmosphere. The measure was developed from a set of 52 items included in a questionnaire administered to the patients of private dental practices in Washington state. Usable questionnaires were returned by 30.8% of patients receiving questionnaires in 1979, 40.1% in 1980, and 34.0% in 1981. Factor analysis plus categorization of items by a panel of professionals were used initially to group items into subscales. Contribution to internal consistency was the final criterion for an item's inclusion in a subscale. Internal consistency of subscales ranged from 0.44 to 0.80. The concurrent validity of the subscales was assessed by relating patient satisfaction to characteristics of the dental practices. The following statistically significant relationships between subscales and criterion variables were observed: dentist–patient relations and percent of patients seen by the dentist; access and number of weeks appointments must be booked in advance; patient waiting time and actual patient waiting time; continuity of care and percent of patients seen by the dentist; auxiliaries performing expanded duties and delegation to auxiliaries; and staff technical quality and percent of hygienist restorations with satisfactory quality. Each relationship was in the expected direction.
The Modern Language Journal | 2002
Hideko Shimizu; Kathy E. Green
The attitudes of 251 second language teachers toward kanji and their choices of instructional strategies for teaching kanji were explored in this study. Principal component analysis resulted in the identification of 6 statistically reliable domains representing underlying attitudes toward teaching kanji (cultural tradition, difficulty of kanji, affective orientation, aptitudes, usefulness of kanji, and expectation for the future of kanji) and 3 instructional strategies (context, memory, and rote learning). Descriptive statistics revealed that the most positive attitude was toward the “usefulness of kanji” and that the most common instructional strategy was “rote learning.” Canonical correlation revealed a statistically significant correlation between 3 attitude variables—affective orientation, usefulness of kanji, and cultural tradition—and 2 instructional strategies—memory and context strategies. The results showed that: (a) the underlying attitudes toward teaching kanji and teaching strategies were multidimensional and complex, and (b) teachers who appreciated the cultural tradition in kanji and its practical utility tended to have a more positive affect and were more likely to utilize memory and contextual strategies for teaching kanji, although rote learning strategies were the most frequent among all teachers.
Research in Higher Education | 2000
Erica M. Johnson; Kathy E. Green; Raymond C. Kluever
The Procrastination Inventory, developed for use with doctoral students in clinical psychology, was modified for use with ABD students and doctoral graduates in a College of Education. The original Procrastination Inventory contained 43 items with 11 subscales. The structure of the revised measure was analyzed through both factor and Rasch analyses, and three more general subscales were found instead of the eleven originally posited: (1) procrastination, 20 items, alpha = .88; (2) perfectionism, 9 items, alpha = .64; and (3) graduate school comfort, 6 items, alpha = .59. Eight items were deleted after the Rasch and factor analyses, resulting in a 35-item scale. Validity was demonstrated by the measure's ability to predict dissertation completion and through correlations with related measures. The Procrastination Inventory is useful in the study of attrition from doctoral programs, particularly at the dissertation stage.
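The subscale reliabilities reported above are coefficient (Cronbach's) alpha. A hedged sketch of that computation on synthetic item scores; the 100 respondents and 6 items below are illustrative, not the inventory data:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an n_respondents x k_items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic data: 6 items sharing one true score plus independent error.
rng = np.random.default_rng(1)
true_score = rng.normal(size=(100, 1))
items = true_score + rng.normal(size=(100, 6))

print(round(cronbach_alpha(items), 2))
```

Alpha rises both with the number of items and with their intercorrelation, which is one reason the 6-item "graduate school comfort" subscale (alpha = .59) trails the 20-item procrastination subscale (alpha = .88).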
Journal of Educational and Behavioral Statistics | 1987
Kathy E. Green; Richard M. Smith
This paper compares two methods of estimating component difficulties for dichotomous test data. Simulated data are used to study the effects of sample size, collinearity, a measurement disturbance, and multidimensionality on the estimation of component difficulties. The two methods of estimation used in this study were conditional maximum likelihood estimation of parameters specified by the linear logistic test model (LLTM) and estimated Rasch item difficulties regressed on component frequencies. The results of the analysis indicate that both methods produce similar results in all comparisons. Neither of the methods worked well in the presence of an incorrectly specified structure or collinearity in the component frequencies. However, both methods appear to be fairly robust in the presence of measurement disturbances as long as there is a large number of cases (n = 1,000). For the case of fitting data with uncorrelated component frequencies, 30 cases were sufficient to recover the generating parameters accurately.
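A hedged sketch of the second estimation method described above: regress estimated item difficulties on the matrix of component frequencies and read off the component difficulties. The frequency matrix Q, the true component difficulties eta, and the noise level are all hypothetical:

```python
import numpy as np

# Hypothetical setup: 30 items, 3 cognitive components. Q[i, j] counts how
# often component j appears in item i; eta holds component difficulties.
rng = np.random.default_rng(2)
n_items, n_components = 30, 3
Q = rng.integers(0, 3, size=(n_items, n_components))
eta = np.array([0.8, -0.4, 1.2])

# "Estimated" Rasch item difficulties: linear in components, plus noise
# standing in for estimation error.
b = Q @ eta + 0.05 * rng.normal(size=n_items)

# Ordinary least squares recovers the component difficulties.
eta_hat, *_ = np.linalg.lstsq(Q, b, rcond=None)
print(np.round(eta_hat, 2))
```

When two component frequencies are nearly collinear, the design matrix Q is ill-conditioned and the regression coefficients become unstable, which matches the paper's finding that neither method copes well with collinearity.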
Psychological Reports | 1990
Kathy E. Green; David H. Schroeder
This study investigated the psychometric quality of the Verbalizer-Visualizer Questionnaire. Analysis indicated the scale was composed of two distinct subscales (visual style and verbal style), not one bipolar scale. While the verbal style subscale predicted verbal ability, the validity of the visual style subscale was not demonstrated. Both subscales should be revised and further study conducted with the visual style subscale.
Journal of General Psychology | 1992
Kathy E. Green; Raymond C. Kluever
The purpose of this study was to identify and test item characteristics that predict the difficulty of items on Raven's Colored Progressive Matrices (CPM; 1965) and Raven's Standard Progressive Matrices (SPM; 1965). CPM item characteristics were defined and rated; Rasch item difficulties were used as the dependent variable, with misfitting items omitted. The multiple R was .90 (.88 using stepwise prediction). When the same predictors were used with SPM items, the multiple R was .69. The results are discussed with respect to cognitive processes and to using item characteristics to create new test items.