
Publications


Featured research published by Annie Brown.


Language Testing | 2003

Interviewer variation and the co-construction of speaking proficiency

Annie Brown

Whilst claims to validity for conversational oral interviews as measures of nontest conversational skills are based largely on the unpredictable or impromptu nature of the test interaction, ironically this very feature is also likely to lead to a lack of standardisation across interviews, and hence potential unfairness. This article addresses the question of variation amongst interviewers in the ways they elicit demonstrations of communicative ability and the impact of this variation on candidate performance and, hence, raters’ perceptions of candidate ability. Through a discourse analysis of two interviews involving the same candidate with two different interviewers, it illustrates how intimately the interviewer is implicated in the construction of candidate proficiency. The interviewers differed with respect to the ways in which they structured sequences of topical talk, their questioning techniques, and the type of feedback they provided. An analysis of verbal reports produced by some of the raters confirmed that these differences resulted in different impressions of the candidate’s ability: in one interview the candidate was considered to be more ‘effective’ and ‘willing’ as a communicator than in the other. The paper concludes with a discussion of the implications for rater training and test design.


Language Testing | 2009

Assessing paired orals: raters' orientation to interaction

Ana Maria Ducasse; Annie Brown

Speaking tasks involving peer-to-peer candidate interaction are increasingly being incorporated into language proficiency assessments, both in large-scale international testing contexts and in smaller-scale, course-related ones. This growth in the popularity and use of paired and group orals has stimulated research, particularly into the types of discourse produced and the possible impact of candidate background factors on performance. However, despite the fact that the strongest argument for the validity of peer-to-peer assessment lies in the claim that such tasks allow for the assessment of a broader range of interactional skills than the more traditional interview-format tests do, there is surprisingly little research into the judgments that are made of such performances. The fact that raters and rating criteria are in a crucial mediating position between output and outcomes warrants investigation into how raters construe the interaction in these tasks. Such investigations have the potential to inform the development of interaction-based rating scales and ensure that validity claims are moved beyond the content level to the construct level. This paper reports the findings of a verbal protocol study of teacher-raters viewing the paired test discourse of 17 beginner dyads in a university-based Spanish as a foreign language course. The findings indicate that the raters identified three interaction parameters: non-verbal interpersonal communication, interactive listening, and interactional management. The findings have implications for our understanding of the construct of effective interaction in paired candidate speaking tests, and for the development of appropriate rating scales.


Language Testing | 1993

The role of test-taker feedback in the test development process: test-takers' reactions to a tape-mediated test of proficiency in spoken Japanese

Annie Brown

This study explores how test-taker reactions in a specific purpose testing context may vary according to characteristics of the test-taker. Such reactions are of interest both theoretically to the researcher and practically to the test developer, who is concerned to ensure that the test is fair and appropriate for all candidates and acceptable to the range of test-takers. The reactions may also be of use in the improvement of test items (as a supplement to item analysis) and test rubrics. The article reports on the use of test-taker feedback in the development of the Occupational Foreign Language Test (Japanese), a tape-mediated test of spoken Japanese for the tourism and hospitality industry. Some 53 trial subjects completed a post-test questionnaire, providing reactions to the test as a whole, to task types, and to individual test items. Relationships are investigated between responses and a number of test-taker characteristics, including gender, type of course undertaken (general or specific purpose), amount of study of the language, time spent in Japan and relevant occupational experience. Responses are also considered for items and persons shown by Rasch IRT analysis as indicating significant misfit. Aspects of the content, construct and face validity of the test are considered in the light of the analysis. The role of feedback in the revision of test items, in the writing of test rubrics and in the development of the test-user handbook is also discussed.


Language Testing | 1994

Book review: Hamp-Lyons, L., editor 1991: Assessing second language writing in academic contexts. Norwood, NJ: Ablex

Annie Brown; Tom Lumley

In this book, Liz Hamp-Lyons presents a collection of articles addressing issues in the assessment of ESL writing in academic settings, particularly those concerned with the question of validity. Some articles present basic issues and are clearly intended for people with little familiarity with testing theory. Others, however, presuppose a greater breadth of experience with the concerns of language testing research. Following Hamp-Lyons’s useful overview of basic concepts and trends in language testing research (reliability, different types of validity, washback), the book is divided into six parts, dealing with the writer, the task, the reader, relating the assessment to the academic community, scoring and feedback, and accountability. In Part I (’The writer’), the first two articles (Ballard and Clanchy; Basham and Kwachka) focus on the issue of attitudes to knowledge and rhetorical styles. They both point out the need for greater tolerance of different cultural approaches to academic writing. However, this raises the question of whether it is feasible for the ESL practitioner, let alone the subject lecturer, to have sufficient breadth of knowledge to be familiar with the cultures of students from a wide range of backgrounds. And, even if it


Applied Linguistics | 2007

Assessed levels of second language speaking proficiency: How distinct?

Noriko Iwashita; Annie Brown; Tim McNamara; Sally Roisin O'Hagan


Archive | 1999

Dictionary of language testing

Alan Davies; Annie Brown; Cathie Elder; Kathryn Hill; Tom Lumley; Tim McNamara


ETS Research Report Series | 2005

An examination of rater orientations and test-taker performance on English-for-academic-purposes speaking tasks

Annie Brown; Noriko Iwashita; Tim McNamara


International English Language Testing System (IELTS) Research Reports 2000: Volume 3 | 2000

An investigation of the rating process in the IELTS oral interview

Annie Brown


International English Language Testing System (IELTS) Research Reports 1998: Volume 1 | 1998

Interviewer style and candidate performance in the IELTS oral interview

Annie Brown; Kathryn Hill


Archive | 2005

Interviewer variability in oral proficiency interviews

Annie Brown

Collaboration


Dive into Annie Brown's collaborations.

Top Co-Authors

Kathryn Hill, University of Melbourne
Tom Lumley, University of Melbourne
Tim McNamara, University of Melbourne
Alan Davies, University of Edinburgh
Sally Roisin O'Hagan, Ministry of Higher Education and Scientific Research