J. Charles Alderson
Lancaster University
Publication
Featured research published by J. Charles Alderson.
TESOL Quarterly | 1979
J. Charles Alderson
The cloze test has received considerable attention in recent years from testers and teachers of English as a foreign language, and is becoming more widely used in language tests, both in the classroom and in standardized tests. However, most of the research has been carried out with native speakers of English, and the results do not produce clear-cut evidence that the cloze test is a valid test of reading comprehension. The article reports on a series of experiments on the cloze procedure in which the variables of text difficulty, scoring procedure and deletion frequency were systematically varied, and the effect of that variation on the relationship between the cloze test and measures of proficiency in English as a foreign language was examined. Previous assumptions about what the cloze procedure tests are questioned, and it is suggested that cloze tests are not suitable tests of higher-order language skills, but can provide a measure of lower-order core proficiency. Testers and teachers should not assume that the procedure will automatically produce valid tests of proficiency in English as a foreign language.
Language Testing | 2005
J. Charles Alderson; Ari Huhta
DIALANG is an on-line language assessment system, which contains tests in 14 European languages and is based on the Common European Framework of Reference (CEFR). It is the first major testing system that is oriented towards diagnosing language skills and providing feedback to users rather than certifying their proficiency. This article describes the contents of Version 1 of DIALANG and the way in which the system works. This is followed by an account of the development of DIALANG tests and of the pilot testing and standard setting procedures. The results of the first analyses of items and self-assessment statements, and of the standard setting procedures, are reported. The article focuses on the results for English, but findings for some other languages are also touched upon.
Language Teaching | 2001
J. Charles Alderson; Jayanti Banerjee
In Part 1 of this two-part review article (Alderson & Banerjee, 2001), we first addressed issues of washback, ethics, politics and standards. After a discussion of trends in testing on a national level and in testing for specific purposes, we surveyed developments in computer-based testing and then finally examined self-assessment, alternative assessment and the assessment of young learners. In this second part, we begin by discussing recent theories of construct validity and the theories of language use that help define the constructs that we wish to measure through language tests. The main sections of the second part concentrate on summarising recent research into the constructs themselves, in turn addressing reading, listening, grammatical and lexical abilities, speaking and writing. Finally we discuss a number of outstanding issues in the field.
Language Assessment Quarterly | 2006
J. Charles Alderson; Neus Figueras; Henk Kuijper; Guenter Nold; Sauli Takala; Claire Tardieu
The Common European Framework of Reference (CEFR) is intended as a reference document for language education, including assessment. This article describes a project that investigated whether the CEFR can help test developers construct reading and listening tests based on CEFR levels. A further question was what would be needed if the CEFR scales, together with the detailed description of language use contained in the CEFR, proved insufficient to guide test development at these levels. The project methodology involved gathering expert judgments on the usability of the CEFR for test construction, identifying what might be missing from the CEFR, developing a frame for analysis of tests and specifications, and examining a range of existing test specifications, guidelines for item writers and sample test tasks for different languages at the six levels of the CEFR. Outcomes included a critical review of the CEFR, a set of compilations of CEFR scales and of test specifications at the different CEFR levels, and a series of frameworks or classification systems, which led to a Web-mounted instrument known as the Dutch CEFR Grid. Interanalyst agreement in using the Grid for analyzing test tasks was quite promising, but the Grid needs to be improved through training and discussion before decisions on test task levels are made. The article concludes, however, that identifying separate CEFR levels is at least as much an empirical matter as it is a question of test content, whether determined by test specifications or identified by any content classification system or grid.
System | 2000
J. Charles Alderson
As developments in information technology have moved apace, and both hardware and software have become more powerful and cheaper, the long-prophesied use of IT for language testing is finally coming about. The Test of English as a Foreign Language (TOEFL) is mounted on computer. CD ROM-based versions of University of Cambridge Local Examinations Syndicate tests are available, and the Internet is beginning to be used to deliver language tests. This paper reviews the advantages and disadvantages of computer-based language tests, explores in detail developments in Internet-based testing using the examples of TOEFL and DIALANG — an innovative on-line suite of diagnostic tests and self-assessment procedures in 14 European languages — and outlines a research agenda for the next decade.
Language Testing | 1994
Dianne Wall; Caroline Clapham; J. Charles Alderson
The nature and validation of placement tests is rarely discussed in the language testing literature, yet placement tests are probably one of the commonest forms of tests used within institutions which are not designed by individual teachers and which are used to make decisions across the institution rather than within individual classes. Questions to be asked in the validation and evaluation of any placement test include the following: Does the placement test correctly identify those students who most need English and study skills classes? Do the students who take the test feel that their language has been accurately measured? Is the content of the test appropriate to the uses made of the tests? Is the test reliable? This paper reports on an attempt to validate an institutional placement test at Lancaster University. After presenting the results of the study, the paper comments both on the validity and reliability of the test, and on the wider issues that influence how validation studies of placement tests can be carried out.
Annual Review of Applied Linguistics | 2009
J. Charles Alderson
The language of international aviation communication is English, but numerous aviation incidents and accidents have involved miscommunication between pilots and air traffic controllers, many of whom are not native speakers of the language. In 2004 the International Civil Aviation Organization (ICAO) published a set of Language Proficiency Requirements and a Proficiency Rating Scale, and by 5 March 2008, air traffic controllers and pilots were required by the ICAO to have a certificate attesting to their proficiency in the language used for international aeronautical communication. Although some organizations made efforts to produce tests by the deadline, in the event an implementation period was allowed, with a new deadline of March 2011. This article describes a number of surveys of tests of aviation English, the implementation of the ICAO requirements, and the rating scales. It concludes that many of the assessment procedures appear not to meet international professional standards for language tests, the implementation of the language assessment policy is inadequate, and much more careful and close monitoring is needed of the quality of the tests and assessment procedures required by the policy.
Language Testing | 2010
J. Charles Alderson
The Lancaster Language Testing Research Group was commissioned in 2006 by the European Organisation for the Safety of Air Navigation (Eurocontrol) to conduct a validation study of the development of a test called ELPAC (English Language Proficiency for Aeronautical Communication), intended to assess the language proficiency of air traffic controllers. As part of that study, Internet searches for other tests of air traffic control identified a number of tests but found very little evidence available to attest to the quality of these tests. It was therefore decided to conduct an independent survey of tests of aviation English, since the consequences of inadequate language tests being used in licensing pilots, air traffic controllers and other aviation personnel are potentially very serious. A questionnaire was developed, based on the Guidelines for Good Practice of the European Association for Language Testing and Assessment (EALTA, 2006), and sent to numerous organizations whose tests were thought to be used for licensure of pilots and air traffic controllers. Twenty-two responses were received, which varied considerably in quantity and quality. This probably reflects variation in the quality of the tests, in the availability of evidence to support claims of quality, and in levels of awareness of appropriate procedures for test development, maintenance and validation. We conclude that we can have little confidence in the meaningfulness, reliability, and validity of several of the aviation language tests currently available for licensure. We therefore recommend that the quality of language tests used in aviation be monitored to ensure they follow accepted professional standards for language tests and assessment procedures.
Language Testing | 2009
J. Charles Alderson
Address for correspondence: J. Charles Alderson, Department of Linguistics and English Language, Lancaster University, Lancaster LA1 4YT, UK; email: [email protected]
1. The TSE will continue to be available separately as long as the paper-based version of TOEFL is available, but TOEFL iBT now incorporates a test of speaking ability.
2. The TWE became a mandatory part of the TOEFL paper-based test in 1998 with the introduction of the TOEFL computer-based test, which included a writing task. The TOEFL iBT includes two writing tasks.
Language Testing | 1988
J. Charles Alderson
This paper reports on work in progress on an international project to revise the English Language Testing Service (ELTS) test, currently jointly produced and administered by the British Council and the University of Cambridge Local Examinations Syndicate. The project is trying out what it believes are somewhat new approaches to the content validation of the revised test, and both the rationale for this and the details of the procedures for test validation are set out here. It remains to be seen, however, whether such procedures will result in improved construct, concurrent or predictive validity.