
Publication


Featured research published by Alastair Pollitt.


Assessment in Education: Principles, Policy & Practice | 2012

The Method of Adaptive Comparative Judgement.

Alastair Pollitt

Adaptive Comparative Judgement (ACJ) is a modification of Thurstone’s method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better. From many such comparisons a measurement scale is created showing the relative quality of students’ work; this can then be referenced in familiar ways to generate test results. The judges are asked only to make a valid decision about quality, yet ACJ achieves extremely high levels of reliability, often considerably higher than practicable operational marking can achieve. It therefore offers a radical alternative to the pursuit of reliability through detailed marking schemes. ACJ is clearly appropriate for performances like writing or art, and for complex portfolios or reports, but may be useful in other contexts too. ACJ offers a new way to involve all teachers in summative as well as formative assessment. The model provides strong statistical control to ensure quality assessment for individual students. This paper describes the theoretical basis of ACJ, and illustrates it with outcomes from some of our trials.
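The scale ACJ builds from pairwise judgements is typically estimated with a Bradley–Terry/Rasch-style model, in which each script gets a quality parameter and the probability that one script beats another depends on the difference between their parameters. As a rough illustration only (not Pollitt's actual implementation, and without the adaptive pairing step), a minimal Bradley–Terry fit by gradient ascent from a handful of simulated judgements might look like this:

```python
import math

def bradley_terry(n_items, comparisons, iters=200, lr=0.1):
    """Estimate relative quality scores from pairwise judgements.

    comparisons: list of (winner, loser) index pairs, one per judgement.
    Returns mean-centred log-quality scores, one per item.
    """
    theta = [0.0] * n_items
    for _ in range(iters):
        grad = [0.0] * n_items
        for w, l in comparisons:
            # P(w beats l) under the Bradley-Terry / Rasch model
            p = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
            # Gradient of the log-likelihood log P(w beats l)
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        theta = [t + lr * g for t, g in zip(theta, grad)]
        # Centre the scale: only differences are identified
        mean = sum(theta) / n_items
        theta = [t - mean for t in theta]
    return theta

# Three pieces of work: judges consistently preferred 2 over 1 and 1 over 0.
judgements = [(2, 1), (1, 0), (2, 0), (2, 1), (1, 0)]
scores = bradley_terry(3, judgements)
assert scores[2] > scores[1] > scores[0]
```

In real ACJ the next pair shown to a judge is chosen adaptively, so that most comparisons are between scripts of similar estimated quality, which is what drives the high reliability reported in the paper.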


Assessment in Education: Principles, Policy & Practice | 2007

Improving the Quality of Contextualized Questions: An Experimental Investigation of Focus.

Ayesha Ahmed; Alastair Pollitt

Setting examination questions in real‐world contexts is widespread. However, when students are reading contextualized questions there is a risk that the cognitive processes provoked by the context can interfere with their understanding of the concepts in the question. Validity may then be compromised. We introduce the concept of focus: a question in a given context is focused to the extent that it addresses the aspects of the context that will be most salient in real life for the students being tested. A more focused context will then help activate relevant concepts, rather than interfering with comprehension and reasoning. In this study, the contexts of questions from a science test for 14‐year‐olds in England were manipulated to increase or decrease the amount of focus. In every instance more focused questions proved better than less focused ones. With additional evidence from interview protocols, we also consider the effects of context focus on the question answering process.


Assessment in Education: Principles, Policy & Practice | 2010

The Support Model for Interactive Assessment

Ayesha Ahmed; Alastair Pollitt

The two most common models for assessment involve measuring how well students perform on a task (the quality model), and how difficult a task students can succeed on (the difficulty model). By exploiting the interactive potential of computers we may be able to use a third model: measuring how much help a student needs to complete a task. We assume that every student can complete it, but some need more support than others. This kind of tailored support will give students a positive experience of assessment, and a learning experience, while allowing us to differentiate them by ability. The computer can offer several kinds of support, such as help with understanding a question, hints on the meanings of key concepts, and examples or analogies. A further type of support has particular importance for test validity: the computer can probe students for a deeper explanation than they have so far given. In subjects like geography or science, markers often would like to ask ‘yes, but why?’, suspecting that students understand more than they have written. We describe a series of studies in which students were given a high-level task as an oral interview and then as an interactive computerised assessment with varying types of support. Implications of the support model for future modes of assessment are discussed.
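The scoring idea behind the support model (score reflects how much help was needed, while every student completes the task) can be sketched in a few lines. This is a hypothetical illustration of the principle, not the authors' system; the item structure, hint ordering, and scoring rule are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SupportItem:
    """One task scored by how much support the student needed."""
    question: str
    hints: list       # ordered from lightest to strongest support
    max_score: int = 4

def administer(item, is_correct, get_answer):
    """Ask the question, revealing successive hints until the student
    succeeds; each hint used costs one mark, with a floor of 1 so that
    every student who completes the task scores something."""
    used = 0
    while True:
        answer = get_answer(item.question, item.hints[:used])
        if is_correct(answer):
            return max(item.max_score - used, 1)
        if used == len(item.hints):
            return 1  # all support exhausted; task completed with full help
        used += 1

# Simulated student who succeeds once two hints have been shown.
item = SupportItem("Why does the balloon rise?",
                   ["Think about density.", "Compare hot and cold air.",
                    "Which is lighter?"])
def get_answer(question, hints_shown):
    return "right" if len(hints_shown) >= 2 else "wrong"

score = administer(item, lambda a: a == "right", get_answer)
assert score == 2  # max_score 4, minus 2 hints used
```

The same loop also accommodates the 'yes, but why?' probe described in the abstract: a probe is simply a further round of interaction, which can be scored as extra credit rather than as a deduction.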


Assessment in Education: Principles, Policy & Practice | 2011

Improving marking quality through a taxonomy of mark schemes

Ayesha Ahmed; Alastair Pollitt

At the heart of most assessments lies a set of questions, and those who write them must achieve two things. Not only must they ensure that each question elicits the kind of performance that shows how ‘good’ pupils are at the subject, but they must also ensure that each mark scheme gives more marks to those who are ‘better’ at it. We outline a general taxonomy of mark schemes applicable across different subjects and question types, with the aim of improving the quality of marking in many different national contexts. We begin by arguing for the acceptance by all concerned of an Importance Statement which expresses what matters most in teaching, learning and assessing a particular subject. Based on extensive studies of exam papers in the UK, and utilising the experience of many examiners, we have developed three specific taxonomies for the kinds of mark scheme that are used with different kinds of questions. The taxonomies show how existing mark schemes vary in their effectiveness, and how they can be improved so that better performances get more marks.


Archive | 1999

A New Model of the Question Answering Process

Alastair Pollitt; Ayesha Ahmed


Educational Research | 2008

Tales of the expected: the influence of students' expectations on question validity and implications for writing exam questions

Victoria Crisp; Ezekiel Sweiry; Ayesha Ahmed; Alastair Pollitt


Archive | 2007

The Demands of Examination Syllabuses and Question Papers

Alastair Pollitt; Ayesha Ahmed; Victoria Crisp


Archive | 1999

Curriculum Demands and Question Difficulty

Ayesha Ahmed; Alastair Pollitt


Archive | 1998

The development of a tool for gauging the demands of GCSE and A Level exam questions.

Sarah A. Hughes; Alastair Pollitt; Ayesha Ahmed


Archive | 2002

Tales of the Expected: The Influence of Students' Expectations on Exam Validity

Ezekiel Sweiry; Victoria Crisp; Ayesha Ahmed; Alastair Pollitt

Collaboration


Alastair Pollitt's top co-authors:

Ayesha Ahmed, University of Cambridge
Gill Elliott, University of Cambridge