Publications


Featured research published by Luke Harding.


Language Testing | 2012

Accent, listening assessment and the potential for a shared-L1 advantage: A DIF perspective

Luke Harding

This paper reports on an investigation of the potential for a shared-L1 advantage on an academic English listening test featuring speakers with L2 accents. Two hundred and twelve second-language listeners (including 70 Mandarin Chinese L1 listeners and 60 Japanese L1 listeners) completed three versions of the University Test of English as a Second Language (UTESL) listening sub-test which featured an Australian English-accented speaker, a Japanese-accented speaker and a Mandarin Chinese-accented speaker. Differential item functioning (DIF) analyses were conducted on data from the tests which featured L2-accented speakers using two methods of DIF detection – the standardization procedure and the Mantel-Haenszel procedure – with candidates matched for ability on the test featuring the Australian English-accented speaker. Findings showed that Japanese L1 listeners were advantaged on a small number of items on the test featuring the Japanese-accented speaker, but these were balanced by items which favoured non-Japanese L1 listeners. By contrast, Mandarin Chinese L1 listeners were clearly advantaged across several items on the test featuring a Mandarin Chinese L1 speaker. The implications of these findings for claims of bias are discussed with reference to the role of speaker accent in the listening construct.
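
The Mantel-Haenszel procedure mentioned here is a standard DIF-detection method, so a brief sketch may help readers unfamiliar with it. The following Python function is purely illustrative (it is not the authors' analysis code, and the function name and data layout are assumptions): it stratifies candidates by a matching ability score, pools the per-stratum 2x2 tables into a common odds ratio, and converts that to the ETS delta scale.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(item_correct, group, matching_score):
    """Mantel-Haenszel DIF statistic for one dichotomous item (illustrative).

    item_correct   -- 0/1 scores on the studied item, one per candidate
    group          -- 'ref' or 'focal' label per candidate
    matching_score -- ability score used to stratify candidates
    Returns (common odds ratio alpha, MH D-DIF on the ETS delta scale).
    """
    # One 2x2 table per ability stratum:
    # [A, B, C, D] = ref correct, ref incorrect, focal correct, focal incorrect
    tables = defaultdict(lambda: [0, 0, 0, 0])
    for y, g, s in zip(item_correct, group, matching_score):
        offset = 0 if g == 'ref' else 2
        tables[s][offset + (0 if y else 1)] += 1

    num = den = 0.0
    for a, b, c, d in tables.values():
        n = a + b + c + d
        if (a + b) == 0 or (c + d) == 0:
            continue  # stratum contains only one group; it carries no information
        num += a * d / n
        den += b * c / n

    if num == 0 or den == 0:
        raise ValueError("degenerate tables: cannot estimate an odds ratio")

    alpha = num / den  # alpha > 1: item favours the reference group
    # ETS delta metric: negative values favour the reference group, positive
    # values the focal group; |value| >= 1.5 is the conventional threshold
    # for flagging large ("category C") DIF.
    mh_d_dif = -2.35 * math.log(alpha)
    return alpha, mh_d_dif
```

In the study's terms, the focal group would be the shared-L1 listeners (e.g., Mandarin Chinese L1 candidates on the test featuring the Mandarin Chinese-accented speaker), with candidates matched on their scores for the test featuring the Australian English-accented speaker.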


Language Assessment Quarterly | 2014

Communicative Language Testing: Current Issues and Future Research

Luke Harding

This article discusses a range of current issues and future research possibilities in Communicative Language Testing (CLT) using, as its departure point, the key questions which emerged during the CLT symposium at the 2010 Language Testing Forum. The article begins with a summary of the 2010 symposium discussion in which three main issues related to CLT are identified: (a) the “mainstreaming” of CLT since 1980, (b) the difficulty for practitioners in utilising and operationalising models of communicative ability, and (c) the challenge of theorising a sufficiently rich communicative construct. These issues are each discussed and elaborated in turn, with the conclusion drawn that, whereas the communicative approach lies dormant in many test constructs, there is scope for a reinvigorated communicative approach that focuses on “adaptability.” A number of future research directions with adaptability at the forefront are proposed.


Language Testing | 2013

Defining the language assessment literacy gap: evidence from a parliamentary inquiry

John Pill; Luke Harding

This study identifies a unique context for exploring lay understandings of language testing and, by extension, for characterizing the nature of language assessment literacy among non-practitioners. The data stem from an inquiry by the Australian House of Representatives Standing Committee on Health and Ageing into the registration processes and support for overseas trained doctors, and comprise Hansard transcripts of the inquiry's public hearings. Sections of the data related to language and language testing (as part of the current registration process for doctors seeking employment in Australia) were identified and coded using a thematic analysis. Findings reveal misconceptions about who is responsible for tests and for decisions based on scores in this context, as well as misconceptions about language testing procedures. Issues also emerge concerning the location of expertise in language and language testing. Discussion of these findings contributes to current debate within the language testing community (e.g., Taylor, 2009) about where responsibility lies for increasing language assessment literacy among non-practitioner stakeholders and how this might best be achieved.


Language Testing | 2015

Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles

Luke Harding; J. Charles Alderson; Tineke Brunfaut

Alderson, Brunfaut and Harding (2014) recently investigated how diagnosis is practised across a range of professions in order to develop a tentative framework for a theory of diagnosis in second or foreign language (SFL) assessment. In articulating this framework, a set of five broad principles was proposed, encompassing the entire enterprise of diagnostic assessment. However, questions remain about how best to implement these principles in practice, particularly in identifying learners' strengths and weaknesses in the less well-documented areas of SFL reading and listening. In this paper, we elaborate on the set of principles by first outlining the stages of a diagnostic process built on these principles, and then discussing the implications of this process for the diagnostic assessment of reading and listening. In doing so, we not only elaborate on the theory of diagnosis with respect to its application in the assessment of these skills, but also discuss the ways in which each construct might be defined and operationalized for diagnostic purposes.


Language Testing | 2009

Test Review: Review of the Certificate of Proficiency in English (CPE) Speaking Test.

Susy Macqueen; Luke Harding

In 2002 the University of Cambridge Local Examinations Syndicate (UCLES) implemented a revised version of the Certificate of Proficiency in English (CPE). CPE, the highest level of the Main Suite of Cambridge ESOL exams, comprises five modules: Reading, Writing, Use of English, Listening and Speaking, the last of which is the focus of this review. The innovations introduced in the revised CPE exam included a paired speaking format with two candidates and two examiners, revised tasks including a collaborative task carried out by the candidates together, a script for the participating examiner to follow, and revised assessment scales.


Language Assessment Quarterly | 2011

Assessor decision-making while marking a note-taking listening test: The case of the OET

Luke Harding; John Pill; Kerry Ryan

This article investigates assessor decision-making when using and applying a marking guide for a note-taking task in a specific-purpose English language listening test. In contexts where note-taking items are used, a marking guide is intended to stipulate what kind of response should be accepted as evidence of the ability under test. However, there remains some scope for assessors to apply their own interpretations of the construct in judging responses that fall outside the information provided in a marking guide. From a content analysis of data collected in a stimulated-recall group discussion, a taxonomy of the types of decisions made by assessors is derived, and the bases on which assessors make such decisions are discussed. The present study is therefore a departure point for further investigations into how assessor decision-making processes while marking open-ended items might be improved.


Assessment in Education: Principles, Policy & Practice | 2017

Bridging Assessment and Learning: A View from Second and Foreign Language Assessment.

J. Charles Alderson; Tineke Brunfaut; Luke Harding

This paper considers issues around the relationship between assessment and learning, as put forward by Baird, Andrich, Hopfenbeck and Stobart (2017), from the perspective of the field of second and foreign language assessment. In our response, we describe shared observations on the nature of research and practice in general educational assessment and in language assessment (including with respect to linking assessment with theories of learning, managing impact, and enhancing assessment literacy). At the same time, we also identify areas where language assessment seems to diverge from current research and practice in general educational assessment (for example in the areas of assessment purposes, construct definitions, and validation theory and practice). As a consequence, we believe that close monitoring of advances in both fields is likely to be mutually beneficial.


Archive | 2018

Teachers Setting the Assessment (Literacy) Agenda: A Case Study of a Teacher-Led National Test Development Project in Luxembourg

Tineke Brunfaut; Luke Harding

In 2011, a team of teachers in Luxembourg developed a proposal to reform their national English language examinations. The teachers sought advice from external language testing consultants and expressed a wish not only to transform their existing examinations, but also to develop their own capacity to construct and evaluate language assessments in line with the principles of good test design. What transpired was a unique, phased language assessment literacy training-development cycle which has led to the successful development of a lower-stakes national test, and to the planning stage for a high-stakes end-of-secondary-school exam – the key objective of the project. In this chapter we provide a narrative account of the reform project in Luxembourg, presenting this context as an illustrative case of teacher-led exam reform in a high-stakes context, and discussing the procedures and challenges encountered between 2011 and 2015. Based on our experiences as consultants on this project, we make some recommendations at the end of the chapter for those working with a similar remit for training and advising on teacher-led exam reform.


Language Assessment Quarterly | 2014

Issues in Language Testing Revisited

J. Charles Alderson; Tineke Brunfaut; Luke Harding

In the late 1970s the language testing scene in North America was very lively, with the work, especially, of John Oller gaining prominence and the first meetings in Florida and Boston that led to the creation of the Language Testing Research Colloquium. In the United Kingdom, however, not very much seemed to be happening, at least in terms of conferences, discussion groups, professional journals, or conference proceedings. This was in contrast with what was happening elsewhere in Europe, where Chris Klein-Braley, Doug Stephenson, and others had formed the Interuniversitäre Sprachtestgruppe and had held meetings in Hürth, Tampere, Darmstadt, and Duisburg and published conference proceedings. It was therefore suggested that a small group of people involved in language testing in the United Kingdom should get together and discuss recent developments in our field. As a result, in the autumn of 1980 a Forum was organised at Lancaster University. Alan Davies and Clive Criper from Edinburgh, Cyril Weir from the Associated Examining Board, Don Porter and Arthur Hughes from the University of Reading, Alan Moller from the British Council, and Caroline Clapham and Charles Alderson from Lancaster University gathered in a small room on a sunny autumn weekend at Lancaster and discussed recent publications that were thought to be in some sense representative of the “state of the art” in, and influential on, (British) language testing. These were


Australian Review of Applied Linguistics | 2008

Language testing and English as an international language

Catherine Elder; Luke Harding

Collaboration


Dive into Luke Harding's collaborations.

Top Co-Authors

John Pill

University of Melbourne

Kerry Ryan

Swinburne University of Technology
