Joan Jamieson
Northern Arizona University
Publications
Featured research published by Joan Jamieson.
Archive | 2011
Carol A. Chapelle; Mary K. Enright; Joan Jamieson
Preface
Acknowledgments
List of Contributors
Chapter 1. Test Score Interpretation and Use (Carol A. Chapelle, Mary K. Enright, and Joan M. Jamieson)
Chapter 2. The Evolution of the TOEFL (Carol A. Taylor and Paul Angelis)
Chapter 3. Frameworks for a New TOEFL (Joan M. Jamieson, Daniel Eignor, William Grabe, and Antony John Kunnan)
Chapter 4. Prototyping New Assessment Tasks (Mary K. Enright, Brent Bridgeman, Daniel Eignor, Robert N. Kantor, Pamela Mollaun, Susan Nissan, Donald E. Powers, and Mary Schedl)
Chapter 5. Prototyping Measures of Listening, Reading, Speaking, and Writing (Mary K. Enright, Brent Bridgeman, Daniel Eignor, Yong-Won Lee, and Donald E. Powers)
Chapter 6. Prototyping a New Test (Kristen Huff, Donald E. Powers, Robert N. Kantor, Pamela Mollaun, Susan Nissan, and Mary Schedl)
Chapter 7. Finalizing the Test Blueprint (Mari Pearlman)
Chapter 8. A Final Analysis (Lin Wang, Daniel Eignor, and Mary K. Enright)
Chapter 9. The TOEFL Validity Argument (Carol A. Chapelle)
Appendix A. 1995 Working Assumptions That Underlie an Initial TOEFL 2000 Design Framework
Appendix B. Summary of 1995 Research Recommendations
Appendix C. Timeline of TOEFL Origins and the New TOEFL Project: Key Efforts and Decisions
TESOL Quarterly | 1986
Carol A. Chapelle; Joan Jamieson
This article reports the results of a study of the effectiveness of computer-assisted language learning (CALL) in the acquisition of English as a second language by Arabic- and Spanish-speaking students in an intensive program. The study also examined two student variables (time spent using the CALL lessons and attitude toward them) as well as four cognitive/affective characteristics: field independence, ambiguity tolerance, motivational intensity, and English-class anxiety. English proficiency was measured by the TOEFL and an oral test of communicative competence. Results indicated that use of the CALL lessons predicted no variance on the criterion measures beyond what could be predicted by the cognitive/affective variables. In addition, time spent using CALL and attitude toward it were found to be significantly related to field independence and motivational intensity. These results indicate that (a) certain types of learners may be better suited to some CALL materials than others, and (b) many learner variables must be considered when researching the effectiveness of CALL.
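The statistical claim at the heart of this abstract is one of incremental variance: adding CALL use to a regression that already contains the learner characteristics should not raise R². The sketch below illustrates that comparison in Python on simulated data; the sample size, coefficients, and variable names are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size; the study's own data are not reproduced here

# Hypothetical stand-ins for the four cognitive/affective predictors.
traits = rng.normal(size=(n, 4))
# CALL use time, made to correlate with field independence (column 0),
# echoing the relationship the study reports.
call_time = 0.5 * traits[:, 0] + rng.normal(size=n)
# Simulated proficiency criterion driven by the traits alone.
toefl = traits @ np.array([4.0, 1.0, 2.0, -1.0]) + rng.normal(0.0, 3.0, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_traits = r_squared(traits, toefl)
r2_full = r_squared(np.column_stack([traits, call_time]), toefl)
print(f"R^2, traits only: {r2_traits:.3f}")
print(f"R^2, traits + CALL time: {r2_full:.3f} (increment: {r2_full - r2_traits:.3f})")
```

Because the simulated CALL-time variable carries no information about the criterion beyond what the traits already provide, the printed increment is close to zero, mirroring the pattern of results the study describes.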
ReCALL | 2004
Joan Jamieson; Carol A. Chapelle; Sherry Preiss
CALL evaluation might ideally draw on principles from fields such as second language acquisition, language pedagogy, instructional design, and testing and measurement in order to make judgments about criteria such as elaborated input, feedback, collaborative learning, authentic tasks, navigation, screen design, reliability, validity, impact, and practicality. In this study, a subset of these criteria was used to evaluate the design of Longman English Online, a set of English as a second or foreign language (ESL/EFL) online courses and assessments. This article illustrates how a set of principles suggested evaluation criteria which, in turn, suggested particular variables for the instructional design; these variables, in turn, suggested potential operationalizations which could be implemented as task features in CALL materials. Results of the judgmental evaluation indicated that most of the criteria were met, though some were met better than others.
Language Testing | 2003
Carol A. Chapelle; Joan Jamieson; Volker Hegelheimer
The web offers new opportunities to realize some of the current ideals for interactive language assessment by providing learners information about their language ability at their convenience. If such tests are to be trusted to provide learners with information that might help to improve their language ability, the tests need to undergo validation processes, but validation theory does not offer specific guidance about what should be included in a validity argument. Conventional wisdom suggests that low-stakes tests require less rigorous validation than high-stakes tests, but what are the factors that affect decisions about the validation process for either? Attempting to make these contributing factors explicit, this article examines the ways in which the purpose of a low-stakes web-based ESL (English as a second language) test guided its design and the validation process. The validity argument resulting from the first phase of the validation process is illustrated.
System | 1983
Carol Chapelle; Joan Jamieson
In the area of language study, many educators are becoming interested in the teaching potential of Computer-Assisted Instruction (CAI). The purpose of this paper is to enumerate some of the techniques and lesson types used in foreign language (FL) courseware on the PLATO IV system at the University of Illinois at Urbana-Champaign. First, the PLATO IV system is outlined in terms of its technical capabilities and its integration into the curricula of language classes. Then, some FL courseware is described to exemplify many aspects of reading, writing, and listening which can be practiced by students on a computer. It is hoped that these examples can serve as a point of departure for the development of future FL courseware.
Language Testing | 2017
Geoffrey T. LaFlair; Daniel Richard Isbell; L. D. Nicolas May; Maria Nelly Gutierrez Arvizu; Joan Jamieson
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by estimates of quality, namely which method introduces the least error as defined by random error, systematic error, and total error. This study compared seven equating methods to no equating: mean, linear Levine, linear Tucker, chained equipercentile, circle-arc, nominal weights mean, and synthetic. A non-equivalent groups anchor test (NEAT) design was used to compare two listening and reading test forms based on small samples (one with 173 test takers, the other with 88) at a university's English for Academic Purposes (EAP) program. The equating methods were evaluated based on the amount of error they introduced and their practical effects on placement decisions. Two types of error (systematic and total) could not be reliably computed owing to the lack of an adequate criterion; consequently, only random error was compared. Among the seven methods, the circle-arc method introduced the least random error as estimated by the standard error of equating (SEE). Classification decisions made using the seven methods differed from those made with no equating; all methods indicated that fewer students were ready for university placement. Although interpretations regarding the best equating method could not be made, circle-arc equating reduced the amount of random error in scores, had reportedly low bias in other studies, accounted for form and person differences, and was relatively easy to compute. It was chosen as the method to pilot in an operational setting.
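As a concrete illustration of the machinery involved, the sketch below implements one of the simpler methods compared here, chained mean equating under a NEAT design, and estimates its standard error of equating (SEE) by bootstrap. The score distributions are simulated; only the two group sizes (173 and 88) come from the study, and this is not the circle-arc method the authors ultimately chose.

```python
import numpy as np

rng = np.random.default_rng(42)

# NEAT design: group P (n = 173) takes Form X, group Q (n = 88) takes
# Form Y, and both groups answer a common set of anchor items.
n_p, n_q = 173, 88
theta_p = rng.normal(0.0, 1.0, n_p)
theta_q = rng.normal(0.3, 1.0, n_q)             # non-equivalent groups
x = 30 + 8 * theta_p + rng.normal(0, 2, n_p)    # Form X total scores
y = 32 + 8 * theta_q + rng.normal(0, 2, n_q)    # Form Y total scores
a_p = 10 + 3 * theta_p + rng.normal(0, 1, n_p)  # anchor scores, group P
a_q = 10 + 3 * theta_q + rng.normal(0, 1, n_q)  # anchor scores, group Q

def chained_mean(score, x, a_p, a_q, y):
    """Chained mean equating: shift X onto the anchor scale via group P,
    then from the anchor scale onto Form Y's scale via group Q."""
    return score - x.mean() + a_p.mean() - a_q.mean() + y.mean()

raw = 35.0
eq = chained_mean(raw, x, a_p, a_q, y)

# Bootstrap SEE: resample each group with replacement, re-equate the same
# raw score, and take the standard deviation across replications.
reps = []
for _ in range(2000):
    ip = rng.integers(0, n_p, n_p)
    iq = rng.integers(0, n_q, n_q)
    reps.append(chained_mean(raw, x[ip], a_p[ip], a_q[iq], y[iq]))

print(f"Form Y equivalent of X = {raw}: {eq:.2f}, bootstrap SEE = {np.std(reps):.2f}")
```

Circle-arc equating differs in that it constrains the equating function to pass through the fixed endpoints of the score scale and estimates only a middle point from the data, which is what tends to keep its random error low with samples this small.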
Educational Measurement: Issues and Practice | 2010
Carol A. Chapelle; Mary K. Enright; Joan Jamieson
Language Learning | 1999
Carol Taylor; Irwin Kirsch; Joan Jamieson; Daniel R. Eignor
Language Learning | 1987
Joan Jamieson; Carol A. Chapelle
The Modern Language Journal | 1992
Joan Jamieson