Tineke Brunfaut
Lancaster University
Publications
Featured research published by Tineke Brunfaut.
Studies in Second Language Acquisition | 2013
Andrea Révész; Tineke Brunfaut
This study investigated the effects of a group of task factors on advanced English as a second language learners’ actual and perceived listening performance. We examined whether the speed, linguistic complexity, and explicitness of the listening text along with characteristics of the text necessary for task completion influenced comprehension. We also explored learners’ perceptions of what textual factors cause difficulty. The 68 participants performed 18 versions of a listening task, and each task was followed by a perception questionnaire. Nine additional students engaged in stimulated recall. The listening texts were analyzed in terms of a variety of measures, utilizing automatized analytical tools. We used Rasch and regression analyses to estimate task difficulty and its relationship to the text characteristics. Six measures emerged as significant predictors of task difficulty, including indicators of (a) lexical range, density, and diversity and (b) causal content. The stimulated recall comments were more reflective of these findings than the questionnaire responses.
Language Testing | 2015
Luke Harding; J. Charles Alderson; Tineke Brunfaut
Alderson, Brunfaut and Harding (2014) recently investigated how diagnosis is practised across a range of professions in order to develop a tentative framework for a theory of diagnosis in second or foreign language (SFL) assessment. In articulating this framework, a set of five broad principles was proposed, encompassing the entire enterprise of diagnostic assessment. However, there remain questions about how best to implement these principles in practice, particularly in identifying learners’ strengths and weaknesses in the less well-documented areas of SFL reading and listening. In this paper, we elaborate on the set of principles by first outlining the stages of a diagnostic process built on these principles, and then discussing the implications of this process for the diagnostic assessment of reading and listening. In doing so, we not only elaborate on the theory of diagnosis with respect to its application in the assessment of these skills, but also discuss the ways in which each construct might be defined and operationalized for diagnostic purposes.
Language Testing | 2018
Gareth McCray; Tineke Brunfaut
This study investigates test-takers’ processing while completing banked gap-fill tasks, designed to test reading proficiency, in order to test theoretically based expectations about the variation in cognitive processes of test-takers across levels of performance. Twenty-eight test-takers’ eye traces on 24 banked gap-fill items (on six tasks) were analysed according to seven online eye-tracking measures representing overall, text and task processing. Variation in processing was related to test-takers’ level of performance on the tasks overall. In particular, as hypothesized, lower-scoring students exerted more cognitive effort on local reading and lower-level cognitive processing in contrast to test-takers who attained higher scores. The findings of different cognitive processes associated with variation in scores illuminate the construct measured by banked gap-fill items, and therefore have implications for test design and the validity of score interpretations.
Language Assessment Quarterly | 2014
Tineke Brunfaut
Professor J. Charles Alderson grew up in the town of Burnley, in the North West of England, and is still based in the North West, but in the ancient city of Lancaster. From Burnley to Lancaster, however, lies a journey and a career which took him all around the world to share his knowledge, skills, and experience in language testing, and to learn from others on language test development and research projects. Charles has worked with, and advised, language testing teams in countries as diverse as Austria, Brazil, the Baltic States, China, Ethiopia, Finland, Hungary, Hong Kong, Malaysia, Slovenia, Spain, Sri Lanka, Tanzania and the United Kingdom, to name just a few. He has been a consultant to, for example, the British Council, the Council of Europe (CoE), the European Commission, the International Civil Aviation Organization (ICAO), the UK’s Department for International Development (DfID, formerly known as the Overseas Development Agency, or ODA) and the Programme for International Student Assessment (PISA) of the Organisation for Economic Co-operation and Development (OECD). For over 30 years, however, Lancaster University in the UK has been his home institution, where he has taught courses on, for example, language assessment, language acquisition, curriculum design, research methodology, statistics, and applied linguistics. The list of post-graduate students supervised by Charles Alderson is
Assessment in Education: Principles, Policy & Practice | 2017
J. Charles Alderson; Tineke Brunfaut; Luke Harding
This paper considers issues around the relationship between assessment and learning, as put forward by Baird, Andrich, Hopfenbeck and Stobart (2017), from the perspective of the field of second and foreign language assessment. In our response, we describe shared observations on the nature of research and practice in general educational assessment and in language assessment (including with respect to linking assessment with theories of learning, managing impact, and enhancing assessment literacy). At the same time, we also identify areas where language assessment seems to diverge from current research and practice in general educational assessment (for example in the areas of assessment purposes, construct definitions, and validation theory and practice). As a consequence, we believe that close monitoring of advances in both fields is likely to be mutually beneficial.
Language Assessment Quarterly | 2014
Tineke Brunfaut
This article presents a number of issues on the topic of Language for Specific Purposes (LSP) testing that were raised during a plenary discussion at the 30th annual Language Testing Forum. The comments particularly focused on (a) past and current conceptualizations and categorizations of LSP tests, (b) tensions between specificity and practicality in LSP test design, and (c) the role of locality in LSP testing. The views exchanged on each of these themes are reported and considered in light of current research and debates. Suggestions are made for future research in the area of LSP testing.
Archive | 2018
Tineke Brunfaut; Luke Harding
In 2011, a team of teachers in Luxembourg developed a proposal to reform their national English language examinations. The teachers sought advice from external language testing consultants and expressed a wish not only to transform their existing examinations, but also to develop their own capacity to construct and evaluate language assessments in line with the principles of good test design. What transpired was a unique, phased language assessment literacy training-development cycle which has led to the successful development of a lower-stakes national test, and the planning stage for a high-stakes end-of-secondary school exam, the key objective of the project. In this chapter we provide a narrative account of the reform project in Luxembourg, presenting this context as an illustrative case of teacher-led exam reform in a high-stakes context, and discussing the procedures and challenges encountered between 2011 and 2015. Based on our experiences as consultants on this project, we make some recommendations at the end of the chapter for those working with a similar remit for training and advising on teacher-led exam reform.
Language Assessment Quarterly | 2016
Eve Ryan; Tineke Brunfaut
It is not unusual for tests in less-commonly taught languages (LCTLs) to be developed by an experienced item writer with no proficiency in the language being tested, in collaboration with a language informant who is a speaker of the target language but lacks language assessment expertise. How this approach to item writing works in practice, and what factors play a role in it, is largely unrecorded, as are item-writing processes and practices in language assessment in general. Through a case study approach, this study sought to gain insights into test development practices in cases when essential item writer traits are spread across different people. Seven in-depth interviews with language assessment specialists and language informants involved in LCTL reading test development revealed a number of specific characteristics, and also challenges, to test developer recruitment and test development in this context. Findings indicate that this inherently collaborative approach brings with it a sophisticated system of “checks and balances” that may benefit item writing in some respects.
Language Assessment Quarterly | 2014
J. Charles Alderson; Tineke Brunfaut; Luke Harding
In the late 1970s the language testing scene in North America was very lively, with the work, especially, of John Oller gaining prominence, and the first meetings in Florida and Boston leading to the creation of the Language Testing Research Colloquium. In the United Kingdom, however, not very much seemed to be happening, at least in terms of conferences, discussion groups, professional journals, or conference proceedings. This was in contrast with what was happening elsewhere in Europe, where Christine Klein-Braley, Doug Stephenson, and others had formed the Interuniversitäre Sprachtestgruppe, held meetings in Hürth, Tampere, Darmstadt, and Duisburg, and published conference proceedings. It was therefore suggested that a small group of people involved in language testing in the United Kingdom should get together and discuss recent developments in the field. As a result, in the autumn of 1980 a Forum was organised at Lancaster University. Alan Davies and Clive Criper from Edinburgh, Cyril Weir from the Associated Examining Board, Don Porter and Arthur Hughes from the University of Reading, Alan Moller from the British Council, and Caroline Clapham and Charles Alderson from Lancaster University gathered in a small room on a sunny autumn weekend at Lancaster and discussed recent publications that were thought to be in some sense representative of the “state of the art” in, and influential on, (British) language testing. These were
Applied Linguistics | 2015
J. Charles Alderson; Tineke Brunfaut; Luke Harding