Glenn Fulcher
University of Surrey
Publications
Featured research published by Glenn Fulcher.
Archive | 2007
Glenn Fulcher; Fred Davidson
Section A: Introduction
A1. Introducing Validity
A2. Classroom Assessment
A3. Constructs and Models
A4. Test Specifications and Designs
A5. Writing Items and Tasks
A6. Prototypes, Prototyping and Field Tests
A7. Scoring Language Tests and Assessments
A8. Administration and Training
A9. Fairness, Ethics and Standards
A10. Arguments and Evidence in Test Validation and Use
Section B: Expansion
B1. Construct Validity
B2. Pedagogic Assessment
B3. Investigating Communicative Competence
B4. Optimal Specification Design
B5. Washback
B6. Researching Prototype Tasks
B7. Scoring Performance Tests
B8. Interlocutor Training and Behaviour
B9. Ethics and Professionalism
B10. Validity as Argument
Section C: Exploration
C1. Validity - An Exploration
C2. Assessment in School Systems
C3. What do Items Really Test?
C4. Evolution in Action
C5. To See a Test in a Grain of Sand
C6. Analyzing Items and Tasks
C7. Designing an Alternative Matrix
C8. Administration and Alignment
C9. In a Time Far Far Away
C10. To Boldly Go
Glossary. References. Index
Archive | 2010
Glenn Fulcher
1. Testing and assessment in context
2. Standardised testing
3. Classroom assessment
4. Deciding what to test
5. Designing test specifications
6. Evaluating, prototyping and piloting
7. Scoring language tests
8. Aligning tests to standards
9. Test administration
10. Testing and teaching
Epilogue. Glossary
Language Assessment Quarterly | 2004
Glenn Fulcher
This commentary provides a critical and historical review of the Common European Framework of Reference: Learning, Teaching, Assessment (CEF). It is presented within the context of political and policy issues in Europe, and considers the role that the CEF is likely to play in that context, which is beyond the control of language testers. The dangers of institutionalization through political mandate are explored for test providers, test takers, and score users. It is argued that the CEF should be treated as just one of a range of tools for reporting test scores.
Language Testing | 1996
Glenn Fulcher
This article investigates some of the issues which surround the use of tasks in oral tests, with particular reference to the group discussion. This is done from the perspective of a group of students who were asked to attempt three oral tasks. Questionnaire techniques and retrospective reports were used to collect data from the students. The principle is that test-takers have a great deal to offer to the test researcher in making judgements about the value of the tests which they take (Brown, 1993). The issues surrounding task design and use are complex, and are currently being debated not only in language-testing circles but also in the fields of second language acquisition and discourse analysis. For this reason, this article will refer to discussions in all three areas to shed light on the selection of tasks for use in oral tests. Information from the statistical analysis of tests will also be presented. All views about tests and tasks used in tests, however much some authors might eschew theory or statistical analysis (Underhill, 1987), spring from inherent theoretical positions. These positions make predictions about test scores under particular conditions, and the results of analysis enable the researcher to assess whether a view can be supported by empirical evidence. Finally, the article will look at what is possibly one of the most problematic questions in proficiency testing: the generalizability of a test score given on one task to another task or tasks. It is arguably the case that, if this is not possible, there is no justification for proficiency testing.
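As an illustration of that final point, the sketch below (invented band scores, not data from the article) shows the basic statistical logic: if scores on one oral task correlate only weakly with scores on another, a score awarded on a single task generalizes poorly.

```python
# A minimal sketch of the generalizability question, with invented band
# scores for ten test-takers on two hypothetical oral tasks. Requires
# Python 3.10+ for statistics.correlation.
from statistics import correlation

picture_description = [5, 6, 4, 7, 8, 5, 6, 7, 4, 6]
group_discussion    = [4, 6, 5, 7, 7, 4, 6, 8, 5, 5]

r = correlation(picture_description, group_discussion)
print(f"Cross-task correlation: r = {r:.2f}")
# A strong correlation would support generalizing from one task to the
# other; a weak one would point to task-specific variance in the scores.
```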
Language Testing | 2011
Glenn Fulcher; Fred Davidson; Jenny Kemp
Rating scale design and development for testing speaking is generally conducted using one of two approaches: the measurement-driven approach or the performance data-driven approach. The measurement-driven approach prioritizes the ordering of descriptors onto a single scale. Meaning is derived from the scaling methodology and the agreement of trained judges as to the place of any descriptor on the scale. The performance data-driven approach, on the other hand, places primary value upon observations of language performance, and attempts to describe performance in sufficient detail to generate descriptors that bear a direct relationship with the original observations of language use. Meaning is derived from the link between performance and description. We argue that measurement-driven approaches generate impoverished descriptions of communication, while performance data-driven approaches have the potential to provide richer descriptions that offer sounder inferences from score meaning to performance in specified domains. With reference to original data and the literature on travel service encounters, we devise a new scoring instrument, a Performance Decision Tree (PDT). This instrument prioritizes what we term ‘performance effect’ by explicitly valuing and incorporating performance data from a specific communicative context. We argue that this avoids the reification of ordered scale descriptors which we find in measurement-driven scale construction for speaking tests.
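A minimal, hypothetical sketch of the instrument type follows: binary judgements about observable features of the performance route the rater to a score. The questions, tree shape, and score points below are invented for illustration and are not the published PDT for travel service encounters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One binary judgement about an observable feature of the performance."""
    question: str = ""
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    score: Optional[int] = None  # leaves carry the awarded score

def rate(node: Node, observed: dict[str, bool]) -> int:
    """Walk the tree using the rater's yes/no judgements."""
    if node.score is not None:
        return node.score
    return rate(node.yes if observed[node.question] else node.no, observed)

# Invented mini-tree for a service encounter
pdt = Node(
    question="opens and closes the encounter appropriately",
    yes=Node(
        question="repairs misunderstandings without breakdown",
        yes=Node(score=3),
        no=Node(score=2),
    ),
    no=Node(score=1),
)

print(rate(pdt, {
    "opens and closes the encounter appropriately": True,
    "repairs misunderstandings without breakdown": False,
}))  # -> 2
```

The design point the article makes is visible even in this toy version: each branch is anchored to an observable feature of a specific communicative context, rather than to a position on an abstract ordered scale.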
Language Testing | 2003
Glenn Fulcher
There is no published material in the language testing literature on the process of, or good practice in, developing an interface for a computer-based language test. Nor do test development bodies make publicly available any information on how the interfaces for their computer-based language tests were developed. This article describes a three-phase process model for interface design, drawing on practices developed in the software industry and adapting them for computer-based language tests (CBTs). It describes good practice in initial design, emphasizes the importance of usability testing, and argues that only by following a principled approach to interface design can the threat of interface-related construct-irrelevant variance in test scores be avoided. The article also charts concurrent test development activities that take place during each phase of the design process. The model may be used in CBT project management, and it is argued that the publication of good interface design processes contributes to the mix of validity evidence presented to support the use of a CBT.
Language Testing | 2003
Glenn Fulcher; Rosina Márquez Reiter
The difficulty of speaking tasks has only recently become a topic of investigation in language testing. This has been prompted by work on discourse variability in second language acquisition (SLA) research, new classificatory systems for describing tasks, and the advent of statistical techniques that enable the prediction of task difficulty. This article reviews assumptions underlying approaches to research into speaking task difficulty and questions the view that test scores always vary with task conditions or discourse variation. A new approach to defining task difficulty in terms of the interaction between pragmatic task features and first language (L1) cultural background is offered, and the results of a study to investigate the impact of these variables on test scores are presented. The relevance for the generalizability of score meaning and the definition of constructs in speaking tests is discussed.
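The proposed interaction can be pictured with a small cell-means sketch; all numbers below are invented for illustration and are not the study's data.

```python
# Invented cell means: scores crossed by a pragmatic task feature
# (degree of imposition) and L1 cultural background.
from statistics import mean

scores = {
    ("low imposition",  "L1 group A"): [6, 7, 6, 7],
    ("low imposition",  "L1 group B"): [6, 6, 7, 7],
    ("high imposition", "L1 group A"): [6, 6, 7, 6],
    ("high imposition", "L1 group B"): [4, 5, 4, 5],
}

for (task_feature, l1), vals in scores.items():
    print(f"{task_feature:<15} {l1}: mean = {mean(vals):.2f}")
# If scores fall under high imposition for only one L1 group, difficulty
# lies in the task-by-background interaction, not in the task alone.
```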
System | 2000
Glenn Fulcher
This article looks at the phenomenon of 'communicative' language testing as it emerged in the late 1970s and early 1980s as a reaction against tests constructed of multiple-choice items and a perceived over-emphasis on reliability. Lado in particular became a target for communicative testers. It is argued that many of the concerns of the communicative movement had already been addressed outside the United Kingdom, and that Lado was done an injustice. Nevertheless, the jargon of the communicative testing movement, however imprecise it may have been, has shaped the ways in which language testers approach problems today. The legacy of the communicative movement is traced from its first formulation, through present conundrums, to tomorrow's research questions.
Language Testing | 1997
Glenn Fulcher
This report describes a reliability and validity study of the placement test which is used at the University of Surrey as a means of identifying students who may require English language support whilst studying at undergraduate or postgraduate levels. The English Language Institute is charged with testing all incoming students, irrespective of their primary language or subject specialization. Fair and accurate assessment of student abilities, and referring individuals to appropriate language support courses (the in-sessional programme), is an essential support service to all academic departments. The goal of placement testing is to reduce to an absolute minimum the number of students who may face problems or even fail their academic degrees because of poor language ability or study skills. This study looks at the administrative and logistic constraints upon what can be done, and assesses the usefulness of the placement test developed within this context.
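For illustration only: a common way to estimate one part of such a study, the test's internal consistency, is Cronbach's alpha. The sketch below uses an invented score matrix and is not the study's actual analysis, which may well have used a different procedure.

```python
# Cronbach's alpha over an invented 5-item x 6-person matrix of
# dichotomous item scores (1 = correct, 0 = incorrect).
from statistics import pvariance

def cronbach_alpha(items: list[list[int]]) -> float:
    """items[i][p] = score of person p on item i."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(item[p] for item in items) for p in range(len(items[0]))]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

scores = [            # rows = items, columns = test-takers
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.83 for this matrix
```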
System | 1997
Glenn Fulcher
Text difficulty, or text “accessibility”, is an important but much-neglected topic in Applied Linguistics. Establishing text difficulty is relevant to the teacher and syllabus designer who wish to select appropriate materials for learners at a variety of ability levels. It is also critical for test developers selecting reading texts at appropriate levels for inclusion in the reading sub-tests of examinations. Writers of texts for various audiences also need guidance on the range of factors which make texts more or less accessible. In all these cases, however, decisions are still made largely on intuitive grounds. This research specifically addressed the concerns of text writers, but the findings are also relevant to the first two concerns. The research involved the analysis of a corpus of texts, and shows that the factors which make texts difficult, or less accessible, include poor linguistic, contextual, and conceptual structure, and unclear operationalisation of the reader-writer relationship. It is argued that factors not considered in traditional readability formulae are more important determinants of text accessibility.
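For context, the sketch below computes one such traditional readability formula, Flesch Reading Ease, whose reliance on surface counts alone (words, sentences, syllables) is exactly the limitation the article targets; the passage counts are invented.

```python
# Flesch Reading Ease: a readability score built from surface counts
# only, blind to contextual and conceptual structure.
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch (1948): higher = easier; roughly 60-70 is 'plain English'."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Invented counts for a short sample passage
print(f"{flesch_reading_ease(words=120, sentences=8, syllables=180):.1f}")  # 64.7
```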