Frances A. Butler
University of California, Los Angeles
Publications
Featured research published by Frances A. Butler.
Applied Measurement in Education | 2007
Alison L. Bailey; Frances A. Butler; Edynn Sato
Under Title III of the No Child Left Behind (NCLB) Act of 2001 (NCLB, 2001b), every state must demonstrate linkage between its content standards and its English language development standards as input to the development of state English proficiency tests. This article argues that Title III presents a unique opportunity to explore how different content standards can be linked on a common dimension. It focuses on evaluating the degree to which content standards, such as English language arts and science, overlap with English language development standards in terms of the implicit and explicit language demands placed on students. Such linkage helps ensure that language learners are exposed to the types of language that will assist them in being successful in academic contexts.
Language Testing | 1999
Donald E. Powers; Mary Schedl; Susan Wilson Leung; Frances A. Butler
A communicative competence orientation was taken to study the validity of test-score inferences derived from the revised Test of Spoken English (TSE). To implement the approach, a sample of undergraduate students, primarily native speakers of English, provided a variety of reactions to, and judgements of, the test responses of a sample of TSE examinees. The TSE scores of these examinees, previously determined by official TSE raters, spanned the full range of TSE score levels. Undergraduate students were selected as ‘evaluators’ because they, more than most other groups, are likely to interact with TSE examinees, many of whom become teaching assistants. Student evaluations were captured by devising and administering a secondary listening test (SLT) to assess students’ understanding of TSE examinees’ speech, as represented by their taped responses to tasks on the TSE. The objective was to determine the degree to which official TSE scores are predictive of listeners’ ability to understand the messages conveyed by TSE examinees. Analyses revealed a strong association between TSE score levels and the judgements, reactions and understanding of listeners. This finding applied to all TSE tasks and to nearly all of the several different kinds of evaluations made by listeners. Along with other information, the evidence gathered here should help the TSE program meet professional standards for test validation. The procedures may also prove useful in future test-development efforts as a way of determining the difficulty of speaking tasks (and possibly writing tasks).
Language Testing | 1989
Gordon A. Hale; Charles W. Stansfield; Donald A. Rock; Marilyn M. Hicks; Frances A. Butler; John W. Oller
Four categories of multiple-choice (MC) cloze items were examined in relation to the TOEFL. The object was to assess the factor structure of the TOEFL and the potential of distinguishing MC cloze items aimed at reading comprehension (defined in terms of textual constraints ranging across clauses) as contrasted with knowledge of grammar (short-range surface syntax and morphology) or vocabulary. Since it is impossible in principle to distinguish such skills absolutely at any given point in a text, a compromise was to identify items whose difficulty seemed to be based primarily on one level of processing and secondarily on another. The pivotal category was reading comprehension. In all, 50 MC cloze items over three texts were used in four subsets: ones for which reading comprehension seemed to be the primary source of difficulty, and (1) grammar secondary or (2) vocabulary secondary (nine and 14 items respectively); and ones for which either (3) grammar or (4) vocabulary was the main source of difficulty and reading comprehension secondary (15 and 12 items). Results were analysed separately for each of nine language groups, with a total of 11,290 subjects. Factor analysis of the TOEFL suggested two factors related to (a) the Listening Comprehension section, and (b) the nonlistening subsections. The data did not clearly reveal the expected differential relations between the MC cloze categories and subsections of the TOEFL, though tendencies were apparent and analyses on the whole revealed substantial reliability and validity for the MC cloze items.
Theory Into Practice | 1997
Frances A. Butler; Robin Stevens
This article offers an exploratory discussion of the assessment of oral language skills in grades K-6. It is exploratory in the sense that it is intended to stimulate thinking rather than provide answers or set guidelines. The discussion here is an outgrowth of issues being faced as educators look for alternative assessments in general and for innovative approaches to assessing language skills in particular. The position taken in this article is that evaluation works best when it is seen as a continuous, day-to-day, week-by-week process. Parents, teachers, administrators, and, most importantly, students themselves will benefit significantly more from assessment approached as such an ongoing process.
Archive | 2003
Alison L. Bailey; Frances A. Butler
Language Testing | 2001
Frances A. Butler; Robin Stevens
Archive | 2005
Jamal Abedi; Alison L. Bailey; Frances A. Butler; Seth Leon; James Mirocha
US Department of Education | 2004
Frances A. Butler; Carol Lord; Robin Stevens; Malka Borrego; Alison L. Bailey
Center for Research on Evaluation, Standards, and Student Testing (CRESST) | 2004
Frances A. Butler; Alison L. Bailey; Robin Stevens; Becky H. Huang; Carol Lord
ETS Research Report Series | 1988
Gordon A. Hale; Charles W. Stansfield; Donald A. Rock; Marilyn M. Hicks; Frances A. Butler; John W. Oller