Grant Henning
University of California, Los Angeles
Publications
Featured research published by Grant Henning.
Language Testing | 1985
Zheng Chen; Grant Henning
The extent to which language proficiency/placement tests may be biased for or against examinees from particular language or cultural groups has never to our knowledge been the focus of empirical research. The purpose of the present study has been to examine the English as a Second Language Placement Examination (ESLPE) employed at the University of California, to determine the nature, direction and extent of bias present for members of two linguistically and culturally diverse subgroups of the sample of examinees. By comparison of the response patterns of 34 native speakers of Spanish and 77 native speakers of Chinese from among a total sample of 312 students tested with one form of the test, it was possible to identify test items exhibiting bias in their respective skill domains. Included is a discussion of the nature, direction, extent and implications of the bias detected.
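The abstract does not specify the bias-detection procedure used on the ESLPE data, but the basic idea of screening items for differential difficulty between language groups can be sketched as follows. This is an illustrative assumption, not the study's actual method; the function names, toy data, and the 0.25 threshold are all hypothetical.

```python
# Hypothetical item-bias screen: compare per-item proportion-correct
# between two language groups and flag items with large gaps.
# Threshold and data are illustrative, not from the ESLPE study.

def item_difficulties(responses):
    """Proportion correct per item for a list of 0/1 response vectors."""
    n_items = len(responses[0])
    return [sum(r[i] for r in responses) / len(responses) for i in range(n_items)]

def flag_biased_items(group_a, group_b, threshold=0.25):
    """Indices of items whose difficulty differs across groups by more than threshold."""
    p_a = item_difficulties(group_a)
    p_b = item_difficulties(group_b)
    return [i for i, (a, b) in enumerate(zip(p_a, p_b)) if abs(a - b) > threshold]

# Toy data: four examinees per group, three items; item 2 is far harder for group B.
group_a = [[1, 1, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]
group_b = [[1, 1, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
print(flag_biased_items(group_a, group_b))  # [2]
```

A raw difficulty gap confounds group ability differences with item bias, which is why later work (including the Rasch-based studies below) conditions on ability before comparing groups.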
TESOL Quarterly | 1986
Grant Henning
This article attempts to clarify and define quantitative research as it is applied in the study of language acquisition. Trends in the use of quantitative and nonquantitative methods in applied linguistics are reported, and suggestions are made concerning useful paradigms and procedures for further research in language acquisition. In recent years considerable concern has arisen over the misapplication or avoidance of appropriate quantitative methods in language acquisition research. Brown (1986) expressed concern that established conventions in quantitative research methodology were not consistently adhered to by quantitative researchers in applied linguistics. His primary data source was articles appearing in the major professional journals of applied linguistics, such as the TESOL Quarterly and Language Learning. Similarly, Ediger, Lazaraton, and Riggenbach (1986) noted a paucity of formal statistical preparation on the part of the majority of educators responsible for guiding graduate research in applied linguistics in general. Their survey of a large cross section of university graduate thesis and dissertation advisers in applied linguistics revealed that respondents had completed, on average, fewer than two formal courses in research design or statistics and that the majority reported no formal preparation at all. Henning (1985) reported on common problems in quantitative language acquisition research, including unreliability and invalidity of data-elicitation techniques, failure of experimental studies to state a formal hypothesis for testing, failure to report frequencies with percentages or standard deviations with means, and insufficient use of appropriate inferential statistics. In light of these concerns, this article has three purposes: (a) to provide a definition of quantitative research, as opposed to qualitative or anecdotal research; (b) to report on trends in
Language Testing | 1985
Grant Henning; Thom Hudson; Jean L. Turner
Considerable controversy has arisen around the assumption of unidimensionality underlying the application of latent trait models of measurement. The intent of the present paper is to provide a clearer articulation of the unidimensionality assumption and to investigate the robustness and applicability of a particular unidimensional model, the Rasch Model, for use with language proficiency tests that consist of batteries of subtests in a variety of skill areas and that are applied in the testing of the abilities of students from diverse educational, linguistic and cultural backgrounds. Results of the analysis of response data from the administration of a 150-item, five-subskill ESL proficiency/placement examination to 312 entering university students indicated that unidimensionality constraints were not violated.
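The dichotomous Rasch model at the center of this line of work can be stated in a few lines: the probability of a correct response depends only on the difference between person ability and item difficulty. The sketch below is a minimal illustration, not the analysis applied to the ESLPE data; the Newton-Raphson ability estimator assumes item difficulties are already fixed, and the toy difficulties are invented.

```python
import math

# Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))),
# where theta is person ability and b is item difficulty (both in logits).

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(score, difficulties, iters=50):
    """Maximum-likelihood ability for an interior raw score via Newton-Raphson,
    holding item difficulties fixed. (Perfect and zero scores have no finite MLE.)"""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = score - sum(ps)               # d log-likelihood / d theta
        info = sum(p * (1 - p) for p in ps)  # test information at theta
        theta += grad / info
    return theta

b = [-1.0, 0.0, 1.0]          # illustrative item difficulties
theta = estimate_theta(2, b)  # ability for a raw score of 2 out of 3
print(round(theta, 2))        # 0.8
```

Because the model uses a single theta per person, fitting it to a multi-subtest battery implicitly asserts unidimensionality, which is exactly the assumption the paper puts to the test.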
Language Testing | 1985
Fred Davidson; Grant Henning
This study has sought to demonstrate the utility of Rasch Model scalar analysis when applied to self-ratings of ability/difficulty associated with component skills of English as a second language. Eleven skill areas were rated for difficulty on a seven-point Likert-type scale by 228 ESL students at the University of California. Following appropriate tests of unidimensionality, both skill area items and rating categories were calibrated for difficulty, examined for fit to the Rasch Model, and plotted to provide visual representation of the nature of the item characteristic curves. Specific suggestions were made for the improvement of the rating categories of the self-rating scale, and skill areas most susceptible to self-rating error were identified. It was concluded that scalar analysis of the kind considered here is feasible with self-rating data, and that other rating scale procedures such as those employed to rate proficiency in foreign language speaking or writing would probably benefit from similar scalar analyses.
Language Testing | 1984
Grant Henning
Solutions to a number of important problems in language test development not resolved by classical measurement theory are to be found in latent trait measurement theory. An ESL reading comprehension test is analysed first using classical measurement procedures and then with Rasch Model latent trait procedures. The advantages gained through use of the latter are demonstrated and discussed.
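The contrast the paper draws can be illustrated with the simplest pair of item statistics: the classical facility index (proportion correct, which is sample-dependent) and a provisional Rasch difficulty in logits. This is a hedged sketch of the general idea, not the paper's analysis; the one-step log-odds conversion below is an illustrative simplification of full Rasch calibration.

```python
import math

# Classical item facility vs. a provisional logit difficulty.
# The log-odds conversion is a simplification: full Rasch calibration
# estimates all item difficulties jointly, not item by item.

def classical_difficulty(item_responses):
    """Classical facility index: proportion of examinees answering correctly."""
    return sum(item_responses) / len(item_responses)

def logit_difficulty(p):
    """Provisional difficulty in logits: ln((1 - p) / p).
    Harder items get larger values; easy items go negative."""
    return math.log((1 - p) / p)

responses = [1, 1, 1, 0, 1, 0, 1, 1]  # 6 of 8 examinees correct
p = classical_difficulty(responses)    # 0.75
print(round(logit_difficulty(p), 3))   # -1.099: an easy item
```

The practical advantage argued for in the paper is that, once calibrated, logit difficulties sit on an interval scale that does not depend on the particular sample tested, whereas p-values shift with every new group of examinees.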
Language Testing | 1989
Grant Henning
This paper reviews a variety of definitions of local (conditional) independence presented in the testing literature. An attempt is made to clarify and differentiate among conflicting conceptualizations of this fundamental measurement principle. In particular, it is argued that local independence, unidimensionality, and noninvasiveness are important but distinct concepts that may, but need not necessarily, overlap. Methods of testing for the presence of local independence in several of its conceptualizations are also presented.
Language Testing | 1988
Grant Henning
The present study was designed to test the effects of violation of the unidimensionality assumption on Rasch Model estimates of item difficulty and person ability. Also considered was the sensitivity of the Bejar Method, Rasch Model fit statistics, classical internal consistency estimation and principal components analysis in detecting the nature and extent of violations of unidimensionality. For the study of test item dimensionality, use was made of a simulated testing situation involving a two-dimensional 60-item test administered to an illustrative 120-person sample. For investigation of person sample dimensionality, the simulation involved use of a 120-item test with an illustrative 60-person sample. Results clearly suggested that violations of item unidimensionality produced distorted estimates of person ability, and violations of person unidimensionality produced distorted estimates of item difficulty. The Bejar Method was found to be sensitive to such distortions, and results of applying the Bejar Method along with internal consistency estimation and principal components analysis were mutually confirmatory.
Language Testing | 1988
Brian K. Lynch; Fred Davidson; Grant Henning
In an attempt to extend our knowledge about potential implications of person dimensionality for language test validation, the study reported here was designed: (1) to identify person similarities in a language proficiency testing data set; (2) to attempt to relate such person dimensions to demographic information on the persons so as to provide appropriate labels for the dimensions; and (3) to investigate differential item functioning for the person dimensions identified and labelled. This final focus of the study was intended to reveal whether test items were functioning in an equivalent and valid manner for all person dimensions and, if not, whether there were any discernible patterns of item types that were functioning differentially for particular person dimensions.
TESOL Quarterly | 1982
Grant Henning
A method was devised for inter-skill comparative evaluation of instructional programs. The method, termed “Growth-referenced evaluation”, is distinguished by its attempt to focus on comparative rate of growth and to provide empirical indication of areas for program intervention and reform. The method is explained by reference to a sample of 485 adult EFL students in a four-year undergraduate program. For the present context, intervention and program reform were indicated for component skills of listening comprehension and reading comprehension.
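The core computation behind growth-referenced evaluation, as the abstract describes it, is a comparison of growth rates across component skills. The sketch below is a hypothetical miniature: the skill names, score means, and the "flag the slowest" rule are invented for illustration and are not the study's data or exact procedure.

```python
# Hypothetical growth-referenced comparison: mean score gain per program
# level for each skill, flagging the skill with the slowest growth.
# Skill names and numbers are illustrative, not from the 485-student study.

def growth_rates(level_means):
    """Average gain per level for each skill, given per-level score means."""
    return {skill: (m[-1] - m[0]) / (len(m) - 1) for skill, m in level_means.items()}

means = {
    "listening": [40.0, 42.0, 43.5, 44.0],   # four program levels
    "reading":   [38.0, 40.0, 41.0, 42.5],
    "grammar":   [35.0, 41.0, 46.0, 52.0],
}
rates = growth_rates(means)
slowest = min(rates, key=rates.get)
print(slowest)  # listening
```

Ranking skills by growth rate rather than by absolute score is the distinguishing move: a skill can score high yet grow slowly, and it is the slow growers that are flagged for program intervention.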
TESOL Quarterly | 1981
Grant Henning; Samira M. Ghawaby; William Zaki Saadalla; Mohamed Ahmed El-Rifai; Ramzy Kamel Hannallah; Mohamed Said Mattar
The Egyptian Ministry of Education is responsible directly or indirectly for the development and administration of examinations for the measurement of English Language Achievement at all stages of preparatory and secondary education in Egypt. The final secondary school leaving examination, called the General Secondary Certificate Examination (GSCE), is perhaps the most prominent of these examinations. This final examination is designed to serve the dual function of certifying achievement in secondary school for the issuance of diplomas, and of predicting future success in university as a screening instrument of general proficiency. The examination is multidisciplinary, but the focus of the present study is exclusively on its English Language Component.