Publications


Featured research published by Victoria Crisp.


Educational Research | 2006

Can a picture ruin a thousand words? The effects of visual resources in exam questions

Victoria Crisp; Ezekiel Sweiry

Background: When an exam question is read, a mental representation of the task is formed in each student's mind. This processing can be affected by features such as visual resources (e.g. pictures, diagrams, photographs, tables), which can come to dominate the mental representation due to their salience.
Purpose: The aim of this research was to investigate the effects of visual resources in exam questions and, in particular, how and when students use images and whether subtle changes to these salient physical features can affect whether a question is understood and answered in the way intended by the question-setters.
Sample: The participants were 525 16-year-old students, with a range of ability, in four secondary schools.
Design and methods: Experimental test papers were constructed including six questions based on past examination questions and involving graphical elements. For five of the six questions, two versions were designed in order to investigate the effects of changes to visual resources on processing and responses. A sample of the students was interviewed afterwards.
Results: Where two versions of a question were trialled in parallel, the differences in the visual resources significantly affected marks for one question and had smaller effects on marks and the nature of answers for some of the others. Students expressed mixed views over whether a visual resource that is not strictly necessary should be used: some considered it reassuring, whilst others deemed it unnecessary. Evidence in the literature suggests that caution may be needed, since there is a risk that some students may pay too much attention to the image. Findings from one question (question 6) indicated that visuals can increase the likelihood of students making unhelpful interpretations of a question. Students were seen to have sensible expectations regarding when to use information from a visual resource and what is important in an illustration. In addition, more use tended to be made of a technical diagram (in question 12) than of pictures or sketches, and if an image provided a clue to an answer, this was sometimes used in preference to information in the text. Evidence regarding students' use of a table (question 1) indicated that the data in the table cells were given more attention than some of the preceding text and the text in a header; this might apply similarly to other resources such as graphs and charts.
Conclusions: It is important to ensure that the inclusion of a visual resource is carefully considered and that the resource is appropriately designed. If a visual resource is not strictly needed in a question, the writer will need to balance the advantages and disadvantages, and should consider whether and how students are likely to use or be affected by the particular visual resource chosen. The findings and suggested implications of this study are most applicable to high-stakes testing but may also be useful to those preparing school textbooks and to teachers preparing classroom materials.


Cambridge Journal of Education | 2008

Exploring the nature of examiner thinking during the process of examination marking

Victoria Crisp

Despite the abundant literature on educational measurement, there has been relatively little work investigating the psychological processes underpinning marking. This research investigated the processes involved when examiners mark examination responses. Scripts from two geography A-level examinations were used: one requiring short and medium-length responses and one requiring essays. Six examiners marked 50 scripts from each of the two examinations and were later asked to think aloud whilst marking four to six scripts from each examination. Coding and analysis identified different types of reading behaviours and of social, emotional and personal reactions, and provided insight into the nature of evaluations. Some differences between examiners and between question types were identified. Analysis of associations between marker behaviours and marker agreement suggested that positive evaluations, comparisons and thorough reading were important in avoiding severity. Potential implications for marker training and for the impact of technological changes to assessment systems are discussed.


Computers in Education | 2008

The development of a formative scenario-based computer assisted assessment tool in psychology for teachers: The PePCAA project

Victoria Crisp; Christine Ward

Formative computer assisted assessment has become increasingly attractive in Higher Education, where providing useful feedback to large numbers of students can be difficult. However, the nature of such assessments has often been limited to objective questions such as multiple-choice. This paper reports on the development and initial trialling of a more innovative, formative use of computer assisted assessment in a Higher Education context. The European-funded PePCAA (Pedagogical Psychology Computer Assisted Assessment) project developed a series of scenario-based, computer-delivered formative assessments of pedagogical psychology for teachers and trainee teachers, using a range of software features, including the addition of confidence measurement. The project had a two-fold aim: to provide a tool to improve understanding of pedagogical psychology and to explore the potential of more innovative techniques of computer assisted assessment to motivate students and to assess deeper learning. The combination of computer-delivered formative assessment with innovative question styles and confidence ratings is believed to be unique for pedagogical psychology. Scenarios were based on realistic classroom situations and focused on problem solving or on utilising best practice. The PePCAA Learning Assessment Circle (PLAC) provided a framework for indexing the kinds of processes required of users. In the UK, small-scale trialling involved a total of 23 teacher trainees, such that each assessment was attempted by about seven participants. Participants completed evaluation questionnaires after each assessment. Responses from learners indicated that the UK scenarios were generally very well received and had at least partly achieved the aim of stimulating deeper learning. Transfer of assessments between countries proved more difficult than expected. The next stage of development should be to conduct a larger pilot, allowing full investigation of the reliability and validity of the assessments. There is also scope for further development of the PePCAA approach and for its application in other subjects.


Oxford Review of Education | 2010

Towards a model of the judgement processes involved in examination marking

Victoria Crisp

The judgement processes underpinning examination marking are central to achieving fair assessment but are under-researched. This article draws on existing literature and on additional analysis of data collected for a previous article, 'Exploring the nature of examiner thinking during the process of examination marking', to start to piece together the puzzle of the judgement processes involved in marking. In that study, six experienced examiners were asked to 'think aloud' whilst marking a number of scripts from each of two geography examinations, and the resulting 'verbal protocols' were analysed. The analysis identified and discussed the nature of examiner behaviours and reactions in relation to a number of themes: reading and comprehension behaviours; social, emotional and personal reactions; and evaluative behaviours. Drawing on these findings, and by conducting further qualitative analyses of the sequences of behaviours, the article develops a tentative five-phase model of the marking process and uses this to evaluate and link relevant theories from psychology. The paper thus builds on existing understandings of judgement in assessment to further our comprehension of the cognitive processes involved in exam marking. The findings may inform marker training and decisions regarding changes to assessment systems.


Assessment in Education: Principles, Policy & Practice | 2013

Criteria, comparison and past experiences: how do teachers make judgements when marking coursework?

Victoria Crisp

The process by which an assessor evaluates a piece of student work against a set of marking criteria is somewhat hidden and potentially complex. This judgement process is under-researched, particularly in contexts where teachers (rather than trained examiners) conduct the assessment and in contexts involving extended pieces of work. This paper reports research which explored the judgement processes involved when teachers mark General Certificate of Secondary Education (GCSE) coursework. Thirteen teachers across three subjects were interviewed about aspects of their marking judgements. In addition, 378 teachers across a wider range of subjects completed an associated questionnaire. The data provide insights into the way that criteria are used, the role that comparison plays in the process and the importance of various professional experiences to making assessment judgements. Findings are likely to generalise to 'controlled assessments', which have replaced coursework in the GCSE.


Assessment in Education: Principles, Policy & Practice | 2012

A framework for evidencing assessment validity in large-scale, high-stakes international examinations

Stuart Shaw; Victoria Crisp; Nat Johnson

It is important for educational assessment bodies to demonstrate how they are seeking to meet the demands of validity. The approach to validity taken here assumes a ‘consequentialist’ view where the appropriacy of the inferences made on the basis of assessment results is seen as central. This paper describes the development of a systematic approach to the collection of evidence that can support claims about validity for general qualifications. An operational framework was developed drawing on Kane (2006). The framework involves a list of inferences to be justified as indicated by a number of linked validation questions. For each question various data would be gathered to provide ‘evidence for validity’ and to identify any ‘threats to validity’. The structure is designed to be accessible for operational users. This paper describes the development of the proposed framework and the types of methods to be used to gather relevant evidence.


Research in Post-compulsory Education | 2009

Are all assessments equal? The comparability of demands of college‐based assessments in a vocationally related qualification

Victoria Crisp; Nadežda Novaković

The consistency of assessment demands is important to validity. This research investigated the comparability of the demands of college‐assessed units within a vocationally related qualification, drawing on methodological approaches that have previously been used to compare assessments. Assessment materials from five colleges were obtained. After some initial familiarisation and the revision of a framework of demands, 15 expert judges compared the assessment materials from pairs of colleges in turn. For each pair, each judge evaluated which college’s assessment tasks they considered more demanding on a number of dimensions and overall. Analyses suggested that assessments were more demanding at some colleges than others but that differences were not large. An interview study was also conducted in four colleges. The interview data suggested substantial similarities between colleges in the assessment tasks conducted. However, some differences that could affect demands were identified (e.g., degree of authenticity of tasks, availability of exemplars when writing reports).


Research Papers in Education | 2008

Improving students' capacity to show their knowledge, understanding and skills in exams by using combined question and answer papers

Victoria Crisp

This research set out to compare the quality, length and nature of exam responses written in combined question and answer booklets with responses written in separate answer booklets, in order to inform choices about response format. Combined booklets are thought to support candidates by giving more information on what is expected of them, and anecdotal evidence suggests that their use may encourage students to attempt an answer rather than write nothing. However, candidates may waste time if they write more in order to fill the space provided when this extra material is not worth extra credit. Questions from a geography AS-level past paper were arranged to form two subtests. Over 400 students each attempted one part of the test in a combined paper and one part in a separate booklet. Six students were interviewed after taking the test. Four examiners marked the scripts and completed coding sheets to record the length of responses and which of a number of options about the nature of the responses applied (e.g. not enough depth, evidence of 'gap-filling'/irrelevance). On both parts of the test, the mean total scores were significantly higher with the combined paper than with the separate booklet. The combined format often prompted longer answers and, for most items, elicited many more full-length answers. It tended to encourage students to show their depth of knowledge and understanding, to elicit fewer instances of irrelevance (or gap-filling), to reduce the frequency of good answers that were not concise and to increase the occurrence of full, concise responses. The combined format gave students better information on what was needed and encouraged them to show their knowledge, understanding and skills more fully, confirming the view that this format is preferable for exam papers requiring short to medium-length responses.


British Educational Research Journal | 2012

The effects of features of examination questions on the performance of students with dyslexia

Victoria Crisp; Martin H. Johnson; Nadežda Novaković

This research investigated whether features of examination questions influence students with dyslexia differently from other students, potentially affecting whether they have a fair opportunity to show their knowledge, understanding and skills. A number of science examination questions were chosen and, for some questions, two slightly different versions were created. A total of 54 students considered by their teachers to have dyslexia, and a matched control group of 51 students, took the test under exam conditions. A dyslexia screening assessment was administered where possible and some students were interviewed. Facility values and Rasch analysis were used to compare performance between the versions of the same question and between those with and without dyslexia. Chi-square tests showed no statistically significant differences in performance between groups or between question versions. However, some tentative implications for good practice can be inferred (e.g. avoiding ambiguous pronouns, using bullet points).
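
As background to the analyses named in this abstract, the following is a minimal illustrative sketch (not the study's code) of how a facility value and a chi-square comparison might be computed for a single dichotomously scored item. The counts are invented for demonstration; only the group sizes echo those reported above.

# Illustrative sketch: facility values and a chi-square test of
# independence for one dichotomously scored exam item.
# The counts below are hypothetical, not data from the study.
import numpy as np
from scipy.stats import chi2_contingency

# Rows = group (dyslexia, control); columns = (correct, incorrect).
observed = np.array([
    [30, 24],   # hypothetical: 30 of 54 students with dyslexia correct
    [33, 18],   # hypothetical: 33 of 51 control students correct
])

# Facility value: the proportion of candidates answering correctly.
facility = observed[:, 0] / observed.sum(axis=1)
print(f"facility (dyslexia group): {facility[0]:.2f}")
print(f"facility (control group):  {facility[1]:.2f}")

# Chi-square test of independence between group and success on the item.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")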


Irish Educational Studies | 2011

Exploring features that affect the difficulty and functioning of science exam questions for those with reading difficulties

Victoria Crisp

This research explored the measurement characteristics of two science examinations and the potential to use access arrangements data to investigate how students requiring reading support are affected by features of exam questions. For two science examinations, traditional and Rasch analyses provided estimates of difficulty and information on item functioning. For one examination, the performance of students eligible for support from a reader in exams was compared to that of a 'norm' group, and for selected items a sample of student responses was analysed. A number of factors that potentially made questions easier or more difficult, or that contributed to problems with item functioning, were identified, as were several features that may particularly influence those requiring reading support.
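
For readers unfamiliar with the Rasch analyses this abstract mentions, here is a minimal sketch of the dichotomous Rasch model on which such item-difficulty estimates rest (illustrative only, not the study's code; the ability and difficulty values are hypothetical).

# Illustrative sketch of the dichotomous Rasch model: the probability of
# a correct response is a logistic function of ability minus difficulty,
# both expressed on a logit scale.
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Probability that a person of ability theta answers an item of
    difficulty b correctly under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical values: an average-ability student (theta = 0) facing an
# easy item (b = -1.0) and a hard item (b = +1.5).
print(round(rasch_p_correct(0.0, -1.0), 2))  # ~0.73
print(round(rasch_p_correct(0.0, 1.5), 2))   # ~0.18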

Collaboration


Dive into Victoria Crisp's collaborations.

Top Co-Authors

Ayesha Ahmed, University of Cambridge
Sylvia Green, University of Cambridge
Stuart Shaw, University of Cambridge
Nat Johnson, University of Cambridge