Carol A. Chapelle
Iowa State University
Publications
Featured research published by Carol A. Chapelle.
Archive | 2011
Carol A. Chapelle; Mary K. Enright; Joan Jamieson
Contents:
Preface
Acknowledgments
List of Contributors
Chapter 1. Test Score Interpretation and Use (Carol A. Chapelle, Mary K. Enright, and Joan M. Jamieson)
Chapter 2. The Evolution of the TOEFL (Carol A. Taylor and Paul Angelis)
Chapter 3. Frameworks for a New TOEFL (Joan M. Jamieson, Daniel Eignor, William Grabe, and Antony John Kunnan)
Chapter 4. Prototyping New Assessment Tasks (Mary K. Enright, Brent Bridgeman, Daniel Eignor, Robert N. Kantor, Pamela Mollaun, Susan Nissan, Donald E. Powers, and Mary Schedl)
Chapter 5. Prototyping Measures of Listening, Reading, Speaking, and Writing (Mary K. Enright, Brent Bridgeman, Daniel Eignor, Yong-Won Lee, and Donald E. Powers)
Chapter 6. Prototyping a New Test (Kristen Huff, Donald E. Powers, Robert N. Kantor, Pamela Mollaun, Susan Nissan, and Mary Schedl)
Chapter 7. Finalizing the Test Blueprint (Mari Pearlman)
Chapter 8. A Final Analysis (Lin Wang, Daniel Eignor, and Mary K. Enright)
Chapter 9. The TOEFL Validity Argument (Carol A. Chapelle)
Appendix A. 1995 Working Assumptions That Underlie an Initial TOEFL 2000 Design Framework
Appendix B. Summary of 1995 Research Recommendations
Appendix C. Timeline of TOEFL Origins and the New TOEFL Project: Key Efforts and Decisions
Language Testing | 2001
John Read; Carol A. Chapelle
Vocabulary tests are used for a wide range of instructional and research purposes, but we lack a comprehensive basis for evaluating current instruments or developing new lexical measures for the future. This article presents a framework that takes as its starting point an analysis of test purpose and then shows how purpose can be systematically related to test design. The link between the two is based on three considerations that derive from Messick’s (1989) validation theory: construct definition, performance summary and reporting, and test presentation. The components of the framework are illustrated throughout by reference to eight well-known vocabulary measures; for each one there is a description of its design and an analysis of its purpose. It is argued that the way forward for vocabulary assessment is to take account of test purposes in the design and validation of tests, as well as to consider an interactionalist approach to construct definition. This means that a vocabulary test should require learners to perform tasks under contextual constraints that are relevant to the inferences to be made about their lexical ability.
Annual Review of Applied Linguistics | 1999
Carol A. Chapelle
All previous papers on language assessment in the Annual Review of Applied Linguistics make explicit reference to validity. These reviews, like other work on language testing, use the term to refer to the quality or acceptability of a test. Beneath the apparent stability and clarity of the term, however, its meaning and scope have shifted over the years. Given the significance of changes in the conception of validity, the time is ideal to probe its meaning for language assessment.
TESOL Quarterly | 1986
Carol A. Chapelle; Joan Jamieson
This article reports the results of a study of the effectiveness of computer-assisted language learning (CALL) in the acquisition of English as a second language by Arabic- and Spanish-speaking students in an intensive program. The study also examined two student variables, time spent using the CALL lessons and attitude toward them, as well as four cognitive/affective characteristics: field independence, ambiguity tolerance, motivational intensity, and English-class anxiety. English proficiency was measured by the TOEFL and an oral test of communicative competence. Results indicated that the use of CALL lessons predicted no variance on the criterion measures beyond what could be predicted by the cognitive/affective variables. In addition, time spent using CALL and attitude toward it were significantly related to field independence and motivational intensity. These results indicate that (a) certain types of learners may be better suited to some CALL materials than others and (b) it is necessary to consider many learner variables when researching the effectiveness of CALL.
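The core analytic move described above, asking whether CALL-related variables add predicted variance beyond learner characteristics, can be illustrated with a small hierarchical-regression sketch. This is not the study's actual analysis; the data, effect sizes, and variable names below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 120
# Hypothetical learner characteristics: field independence, ambiguity tolerance,
# motivational intensity, English-class anxiety
traits = rng.normal(size=(n, 4))
# Hypothetical CALL variables: time spent on lessons, attitude toward lessons
call_vars = rng.normal(size=(n, 2))
# Hypothetical criterion (e.g., a proficiency score) driven only by the traits here
score = traits @ np.array([5.0, 3.0, 4.0, 2.0]) + rng.normal(scale=10, size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_traits = r_squared(traits, score)                               # step 1: traits only
r2_full = r_squared(np.column_stack([traits, call_vars]), score)   # step 2: add CALL variables
print(f"Incremental R^2 from CALL variables: {r2_full - r2_traits:.3f}")

With this synthetic data the increment is near zero, which is the pattern the abstract reports for the CALL variables.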
ReCALL | 2013
Maja Grgurović; Carol A. Chapelle; Mack C. Shelley
With the aim of summarizing years of research comparing pedagogies for second/foreign language teaching supported with computer technology and pedagogy not-supported by computer technology, a meta-analysis was conducted of empirical research investigating language outcomes. Thirty-seven studies yielding 52 effect sizes were included, following a search of literature from 1970 to 2006 and screening of studies based on stated criteria. The differences in research designs required subdivision of studies, but overall results favored the technology-supported pedagogy, with a small, but positive and statistically significant effect size. Second/foreign language instruction supported by computer technology was found to be at least as effective as instruction without technology, and in studies using rigorous research designs the CALL groups outperformed the non-CALL groups. The analyses of instructional conditions, characteristics of participants, and conditions of the research design did not provide reliable results because of the small number of effect sizes representing each group. The meta-analysis results provide an empirically-based response to the questions of whether or not technology-supported pedagogies enhance language learning, and the process of conducting the meta-analysis pointed to areas in research methodology that would benefit from attention in future research.
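As a rough illustration of the quantities a meta-analysis works with (not the authors' coding scheme or model), the sketch below computes a standardized mean difference for each hypothetical study and pools the effect sizes with inverse-variance weights; all study summaries are invented.

import math

def cohens_d(m_call, m_ctrl, sd_call, sd_ctrl, n_call, n_ctrl):
    """Standardized mean difference between a CALL group and a comparison group."""
    sd_pooled = math.sqrt(((n_call - 1) * sd_call**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_call + n_ctrl - 2))
    return (m_call - m_ctrl) / sd_pooled

def d_variance(d, n_call, n_ctrl):
    """Approximate sampling variance of d."""
    return (n_call + n_ctrl) / (n_call * n_ctrl) + d**2 / (2 * (n_call + n_ctrl))

# Invented per-study summaries: (mean_CALL, mean_control, sd_CALL, sd_control, n_CALL, n_control)
studies = [(78, 74, 10, 11, 30, 30), (65, 66, 8, 9, 25, 28), (82, 75, 12, 12, 40, 38)]

effects = []
for s in studies:
    d = cohens_d(*s)
    effects.append((d, d_variance(d, s[4], s[5])))

weights = [1.0 / v for _, v in effects]              # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
print(f"Fixed-effect pooled d = {pooled:.2f}")

A "small but positive" pooled value, as reported in the abstract, would fall roughly in the 0.2-0.4 range under common interpretive conventions.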
TESOL Quarterly | 2003
Carol A. Chapelle; Patricia A. Duff
Research practices evolve as new issues and questions emerge and as new methods and tools are developed to address them. In view of the changing landscape of research in the TESOL profession, TESOL Quarterly's Editorial Advisory Board regularly reexamines the guidelines for research provided for contributors to keep the guidelines up to date and reflective of the agreed-on conventions for undertaking and reporting research. Since 1992, TESOL Quarterly has included guidelines for statistical research at the back of each issue to guide the growing number of contributors conducting such research. In 1994, the increase in qualitative studies submitted to TESOL Quarterly prompted the Editorial Advisory Board to include a set of qualitative research guidelines for contributors as well.
Second Language Research | 1994
Carol A. Chapelle
Second language (L2) researchers (Singleton and Little, 1991) have suggested that C-tests, developed as norm-referenced measures for proficiency and placement testing (Klein-Braley, 1985), can be used in L2 vocabulary research. This article illustrates how researchers can bring essentials of measurement theory to bear on L2 research by weighing validity justifications pertaining to use of the C-test method for vocabulary assessment in L2 research. Validity is defined using the predominant framework from current measurement theory (Messick, 1989), and its relevance for L2 research is explained. The cornerstone of the definition is construct validity, which requires a definition of the construct to be measured: interlanguage vocabulary (i.e., vocabulary ability). A theoretical definition of vocabulary ability is presented and used to consider justifications for and against interpreting C-test performance as indicative of vocabulary ability. On the basis of evidence concerning construct validity and utility, as well as the consequences of interpretations, the potentials and limitations of the C-test method for L2 vocabulary research are identified.
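For readers unfamiliar with the C-test format, the sketch below applies one common version of the deletion rule associated with Klein-Braley (1985): starting with the second sentence, the second half of every second word is replaced by a gap. Conventions for odd-length words, numerals, punctuation, and proper nouns vary across implementations, so treat this as an approximation rather than a canonical procedure; the sample passage is invented.

import re

def make_c_test(text):
    """Turn a short passage into a C-test item: first sentence intact, then gaps."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    out = [sentences[0]]                      # leave the first sentence as context
    for sent in sentences[1:]:
        words = sent.split()
        for i, w in enumerate(words):
            if i % 2 == 1 and len(w) > 1:     # every second word
                keep = (len(w) + 1) // 2      # keep the first (larger) half
                words[i] = w[:keep] + "_" * (len(w) - keep)
        out.append(" ".join(words))
    return " ".join(out)

sample = ("Vocabulary knowledge develops gradually. Learners meet new words in context. "
          "Repeated exposure helps them recall form and meaning.")
print(make_c_test(sample))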
ReCALL | 2004
Joan Jamieson; Carol A. Chapelle; Sherry Preiss
CALL evaluation might ideally draw on principles from fields such as second language acquisition, language pedagogy, instructional design, and testing and measurement in order to make judgments about criteria such as elaborated input, feedback, collaborative learning, authentic tasks, navigation, screen design, reliability, validity, impact, and practicality. In this study, a subset of these criteria was used to evaluate the design of Longman English Online, a set of English as a second or foreign language (ESL/EFL) online courses and assessments. This article illustrates how a set of principles suggested evaluation criteria, which in turn suggested particular variables for the instructional design; these variables, in turn, suggested potential operationalizations that could be implemented as task features in CALL materials. Results of the judgmental evaluation indicated that most of the criteria were met, although some better than others.
Annual Review of Applied Linguistics | 2007
Carol A. Chapelle
Computer technology provides learners with new and varied options for language learning through interactive tasks delivered on CD-ROMs, Web pages, and communications software on the Internet. In view of the dramatically different language experiences that learners engage in because of computer technology, researchers need to reconsider any approach to second language acquisition (SLA) concerned with explaining how language development is prompted by exposure to the target language. Virtually all theories are concerned with the role of linguistic input or the environment (VanPatten & Williams, 2007), and therefore technology needs to be considered.
TESOL Quarterly | 1990
Carol A. Chapelle
Understanding how the speed, power, and flexibility of computers can facilitate second language acquisition is an intriguing challenge faced by instructors, researchers, and theorists. Progress in this area, however, does not appear to be forthcoming from current research on computer-assisted language learning (CALL), which suffers from the same limitations as early research on classroom instruction: Little detail is provided to describe the interaction among participants during instruction (Long, 1980). Moreover, descriptions of CALL activities included in reported research are not empirically based: They fail to describe what subjects actually do while working with CALL. A third problem is that the terms used to describe CALL activities have been developed specifically for that purpose, and are therefore not comparable to those used for classroom activities. At the same time, these descriptors are not sufficiently uniform and formally stated to allow specific comparisons among CALL activities. Toward a solution to these problems, this paper proposes a discourse analysis of student-computer interaction enabled by viewing the student and the computer as two participants in a dialogue. It argues that the discourse analysis system of classroom interaction developed by Sinclair and Coulthard (1975) provides the necessary elements and structures to describe CALL discourse, analyze data from student-computer interaction, and compare CALL activities with other (classroom) activities.
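As a rough illustration of what such an analysis might record (this is not the paper's coding scheme, and the turns below are invented), student-computer interaction can be represented with the exchange and move ranks of Sinclair and Coulthard (1975), so that a CALL session log can be coded as Initiation-Response-Feedback exchanges and compared with classroom discourse.

from dataclasses import dataclass, field

@dataclass
class Move:
    participant: str      # "computer" or "student"
    move_type: str        # "initiation", "response", or "feedback"
    text: str

@dataclass
class Exchange:
    moves: list = field(default_factory=list)

# One invented exchange from a hypothetical grammar exercise
session = [
    Exchange([
        Move("computer", "initiation", "Choose the correct verb form: She ___ to class."),
        Move("student", "response", "goes"),
        Move("computer", "feedback", "Correct."),
    ]),
]

for i, exchange in enumerate(session, start=1):
    pattern = "-".join(m.move_type[0].upper() for m in exchange.moves)
    print(f"Exchange {i}: {pattern}")   # -> Exchange 1: I-R-F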