Publication


Featured research published by Catherine Elder.


Language Learning | 2001

Can We Predict Task Difficulty in an Oral Proficiency Test? Exploring the Potential of an Information-Processing Approach to Task Design.

Noriko Iwashita; Tim McNamara; Catherine Elder

This study addresses the following question: Are different task characteristics and performance conditions (involving assumed different levels of cognitive demand) associated with different levels of fluency, complexity, or accuracy in test candidate responses? The materials for the study were a series of narrative tasks involving a picture stimulus; the participants were 193 pre-university students taking English courses. We varied the conditions for tasks in each dimension and measured the impact of these factors on task performance with both familiar detailed discourse measures and specially constructed rating scales, analyzed using Rasch methods. We found that task performance conditions in each dimension failed to influence task difficulty and task performance as expected. We discuss implications for the design of speaking assessments and for broader research.
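As background to the Rasch methods mentioned above: the dichotomous Rasch model relates a candidate's ability and a task's difficulty to the probability of success on a log-odds scale. The Python sketch below is illustrative only; the function name and parameters are assumptions, and the study itself used rating scales, which call for polytomous extensions of this basic form.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model: the probability of success is a logistic
    function of ability minus difficulty (both expressed in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A candidate 0.5 logits above a task's difficulty succeeds ~62% of the time.
print(rasch_probability(ability=1.0, difficulty=0.5))  # ~0.62
```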


Canadian Modern Language Review / Revue canadienne des langues vivantes | 2009

Implicit and Explicit Knowledge in Second Language Learning, Testing and Teaching

Rod Ellis; Shawn Loewen; Catherine Elder; Rosemary Erlam; Jenefer Philp; Hayo Reinders

Part 1: Introduction
Chapter 1: Implicit and explicit learning, knowledge and instruction - Rod Ellis
Part 2: The measurement of implicit and explicit knowledge
Chapter 2: Defining and measuring implicit and explicit knowledge of a second language - Rod Ellis
Chapter 3: Elicited oral imitation as a measure of implicit knowledge - Rosemary Erlam
Chapter 4: Grammaticality judgement tests and the measurement of implicit and explicit L2 knowledge - Shawn Loewen
Chapter 5: Validating a metalinguistic test - Cathie Elder
Part 3: Applying the measures of implicit and explicit knowledge
Chapter 6: Investigating learning difficulty as implicit and explicit knowledge - Rod Ellis
Chapter 7: Implicit and explicit knowledge of an L2 and language proficiency - Cathie Elder
Chapter 8: Pathways to proficiency: learning experiences and attainment in implicit and explicit knowledge of English as a second language - Jenefer Philp
Chapter 9: Exploring the metalinguistic knowledge of teacher trainees - Rosemary Erlam, Jenefer Philp, and Cathie Elder
Part 4: Form-focused instruction and the acquisition of implicit and explicit knowledge
Chapter 10: The roles of output-based and input-based instruction in the acquisition of L2 implicit and explicit knowledge - Rosemary Erlam, Shawn Loewen and Jenefer Philp
Chapter 11: The incidental acquisition of 3rd person -s as L2 implicit and explicit knowledge - Shawn Loewen, Rosemary Erlam and Rod Ellis
Chapter 12: The effects of two types of input on the acquisition of L2 implicit and explicit knowledge - Hayo Reinders and Rod Ellis
Chapter 13: Implicit and explicit corrective feedback and the acquisition of L2 grammar - Rod Ellis, Shawn Loewen and Rosemary Erlam
Part 5: Conclusion
Chapter 14: Retrospect and prospect - Rod Ellis


Health Promotion International | 2009

Up to a quarter of the Australian population may have suboptimal health literacy depending upon the measurement tool: results from a population-based survey

Melissa N. Barber; Margaret Staples; Richard H. Osborne; Rosemary Clerehan; Catherine Elder; Rachelle Buchbinder

The objective of this paper is to measure health literacy in a representative sample of the Australian general population using three health literacy tools; to consider the congruency of the results; and to determine whether these assessments were associated with socio-demographic characteristics. Face-to-face interviews were conducted with a stratified random sample of the adult Victorian population identified from the 2004 Australian Government Electoral Roll. Participants were invited to participate by mail and follow-up telephone call. Health literacy was measured using the Rapid Estimate of Adult Literacy in Medicine (REALM), the Test of Functional Health Literacy in Adults (TOFHLA) and the Newest Vital Sign (NVS). Of 1680 people invited to participate, 89 (5.3%) were ineligible, 750 (44.6%) were not contactable by phone, 531 (32%) refused and 310 agreed to participate (response rate 310/1591, 19.5%). Compared with the general population, participants were slightly older, better educated and had a higher annual income. The proportion of participants with less than adequate health literacy levels varied: 26.0% (80/308) for the NVS, 10.6% (33/310) for the REALM and 6.8% (21/309) for the TOFHLA. A varying but significant proportion of the general population was found to have limited health literacy. The health literacy measures we used, while moderately correlated, appear to measure different but related constructs and use different cut-offs to indicate poor health literacy.
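The percentages reported above follow directly from the quoted counts; the short sketch below is just an arithmetic check, with the counts copied from the abstract:

```python
# Counts copied from the abstract above.
response_rate = 310 / 1591  # ~0.195 -> 19.5% of eligible invitees participated
less_than_adequate = {
    "NVS": 80 / 308,     # ~26.0%
    "REALM": 33 / 310,   # ~10.6%
    "TOFHLA": 21 / 309,  # ~6.8%
}
print(f"response rate: {response_rate:.1%}")
for tool, proportion in less_than_adequate.items():
    print(f"{tool}: {proportion:.1%} with less than adequate health literacy")
```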


Annual Review of Applied Linguistics | 2006

Assessing English as a Lingua Franca

Catherine Elder; Alan Davies

This chapter proposes two alternative models for assessing English as a Lingua Franca (ELF). Tests based on the first model resemble existing approaches to assessing English as a foreign language offered by such tests as TOEFL and IELTS. This model assumes that interlocutors use varieties of English based on Standard English. What distinguishes tests of this model from existing international tests of English is that this model explicitly allows test accommodations. Such accommodations modify the test delivery system in order to make it accessible and fair for ELF users without changing the construct. Tests based on the second model assume that ELF may be regarded not as a use of Standard English but as a code in its own right. Similarities to varieties of World Englishes, such as Singapore English and Indian English, are noted. In tests based on the second model, strategic competence takes precedence over linguistic accuracy. Although both models are somewhat problematic in practice, neither, it is argued, entails any radical reconceptualization of language testing beyond what has already been envisaged and/or enacted in the field. Nevertheless, future tests of ELF may have both symbolic and practical importance, giving greater authority and legitimacy to expanding and outer circle English voices on the one hand and giving flesh to definitions of effective intercultural communication on the other. The chapter concludes by cautioning against moving too quickly to assess ELF before it has been properly described.


Language Teaching Research | 2005

Language choices and pedagogic functions in the foreign language classroom: a cross-linguistic functional analysis of teacher talk

Sun Hee Ok Kim; Catherine Elder

This article examines the language choices made by native-speaker teachers of Japanese, Korean, German and French in foreign language (FL) classrooms in New Zealand secondary schools. It explores these teachers’ patterns of alternation between English, the majority language, and the TL, using both AS-units (Analysis of Speech units), devised by Foster et al. (2000), and a multiple-category coding system entitled ‘Functional Language Alternation Analysis of Teacher Talk’ (FLAATT), developed expressly to allow a cross-linguistic comparison of the relationship between teachers’ language choices and particular pedagogic functions. Findings suggest that the participating teachers differed markedly from one another not only in the amount of TL used but also in the pedagogic functions they used most frequently and in the language (TL or English) they chose for these functions. There was a tendency among most teachers to avoid complex interactions in the TL, limiting the potential for intake and for real communication on the part of the students. Implications are drawn for research and for teacher education.
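The abstract does not spell out how FLAATT codes are tabulated; purely as an illustration of the kind of function-by-language cross-tabulation such a coding system implies, here is a minimal sketch in which the function labels and coded utterances are invented for the example:

```python
from collections import Counter

# Invented coded utterances: (pedagogic_function, language_choice).
# The function labels are illustrative, not FLAATT's actual categories.
coded_utterances = [
    ("giving instructions", "TL"), ("managing behaviour", "English"),
    ("giving feedback", "TL"), ("giving instructions", "English"),
    ("explaining grammar", "English"), ("giving feedback", "TL"),
]

# Tally how often each function is realized in the TL versus English.
tally = Counter(coded_utterances)
for (function, language), count in sorted(tally.items()):
    print(f"{function:20s} {language:8s} {count}")
```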


Language Testing | 2002

Estimating the Difficulty of Oral Proficiency Tasks: What Does the Test-Taker Have to Offer?

Catherine Elder; Noriko Iwashita; Tim McNamara

This study investigates the impact of performance conditions on perceptions of task difficulty in a test of spoken language, in light of the cognitive complexity framework proposed by Skehan (1998). Candidates performed a series of narrative tasks whose characteristics, and the conditions under which they were performed, were manipulated, and the impact of these on task performance was analysed. Test-takers recorded their perceptions of the relative difficulty of each task and their attitudes to them. Results offered little support for Skehan’s framework in the context of oral proficiency assessment and also raised doubts about post hoc estimates of task difficulty by test-takers.


Language Testing | 2001

Assessing the language proficiency of teachers: are there any border controls?

Catherine Elder

This article takes up some of the issues identified by Douglas (2000) as problematic for Language for Specific Purposes (LSP) testing, making reference to a number of performance-based instruments designed to assess the language proficiency of teachers or intending teachers. The instruments referred to include proficiency tests for teachers of Italian as a foreign language in Australia (Elder, 1994) and for trainee teachers using a foreign language (in this case English) as a medium for teaching school subjects such as mathematics and science in Australian secondary schools (Elder, 1993b; Viete, 1998). The first problem addressed in the article has to do with specificity: how does one define the domain of teacher proficiency, and is it distinguishable from other areas of professional competence or, indeed, from what is often referred to as ‘general’ language proficiency? The second problem has to do with the vexed issue of authenticity: what constitutes appropriate task design on a teacher-specific instrument, and to what extent can ‘teacher-like’ language be elicited from candidates in the very artificial environment of a test? The third issue pertains to the role of nonlanguage factors (such as strategic competence or teaching skills) which may affect a candidate’s response to any appropriately contextualized test task, and whether these factors can or should be assessed independently of the purely linguistic qualities of the test performance. All of these problems are about blurred boundaries: between and within real-world domains of language use, between the test and the nontest situation, and between the components of ability or knowledge measured by the test. It is argued that these blurred boundaries are an indication of the indeterminacy of LSP, as currently conceptualized, as an approach to test development.


Language Testing | 2007

Evaluating rater responses to an online training program for L2 writing assessment

Catherine Elder; Gary Barkhuizen; Ute Knoch; Janet von Randow

The use of online rater self-training is growing in popularity and has obvious practical benefits, facilitating access to training materials and rating samples and allowing raters to reorient themselves to the rating scale and self-monitor their behaviour at their own convenience. However, there has thus far been little research into rater attitudes to training via this modality or into its effectiveness in enhancing levels of inter- and intra-rater agreement. The current study explores these issues in relation to an analytically scored academic writing task designed to diagnose undergraduates’ English learning needs. Eight ESL raters scored a number of pre-rated benchmark writing samples online and received immediate feedback in the form of a discrepancy score indicating the gap between their own rating of the various categories of the rating scale and the official ratings assigned to the benchmark writing samples. A batch of writing samples was rated twice (before and after participating in the online training) by each rater, and multifaceted Rasch analyses were used to compare levels of rater agreement and rater bias (on each analytic rating category). Raters’ views regarding the effectiveness of the training were also canvassed. While findings revealed limited overall gains in reliability, there was considerable individual variation in receptiveness to the training input. The paper concludes with suggestions for refining the online training program and for further research into factors influencing rater responsiveness.
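The discrepancy-score feedback described above can be pictured as a signed per-category difference between a rater's scores and the official benchmark scores; the sketch below makes that assumption explicit (the category names and scores are invented):

```python
# Invented analytic categories and scores for one benchmark writing sample.
official_scores = {"content": 5, "organisation": 4, "vocabulary": 4, "grammar": 3}
rater_scores    = {"content": 4, "organisation": 4, "vocabulary": 5, "grammar": 3}

# Signed discrepancy per category: positive = more lenient than the benchmark,
# negative = more severe. (An assumed reading of "discrepancy score".)
discrepancy = {cat: rater_scores[cat] - official_scores[cat] for cat in official_scores}
print(discrepancy)  # {'content': -1, 'organisation': 0, 'vocabulary': 1, 'grammar': 0}
```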


Language Testing | 2011

Judgments of oral proficiency by non-native and native English speaking teacher raters: Competing or complementary constructs?

Ying Zhang; Catherine Elder

This paper reports the findings of an empirical study on ESL/EFL teachers’ evaluation and interpretation of oral English proficiency as elicited by the national College English Test-Spoken English Test (CET-SET) of China. Informed by debates on the issue of native speaker (NS) norms which have become the focus of attention in recent years, this study addresses the question of whether judgments of language proficiency by non-native English speaking (NNES) teachers, who are currently used to assess performance on the CET-SET, correspond to those of native English speaking (NES) teachers or whether the two groups draw on different constructs of oral proficiency. Data for the study were derived from two sources: unguided holistic ratings given by a group of 19 NES and 20 NNES teachers to CET-SET speech samples from 30 test-takers, and written comments to justify the ratings assigned. Results yielded by both quantitative (MFRM) and qualitative analyses of teacher data, revealed no significant difference in raters’ holistic judgments of the speech samples and a broad level of agreement between groups on the construct components of oral English proficiency. However, the analysis of raters’ comments revealed both quantitative and qualitative differences in the way NES and NNES teachers weighed various features of the oral proficiency construct in justifying the decisions made. The paper concludes by considering the implications of the study’s findings for debates about the native speaker norm as the target for language learners and test-takers.
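The MFRM (many-facet Rasch measurement) analyses mentioned here extend the basic Rasch model sketched after the 2001 Language Learning entry above by adding facets, most importantly rater severity. The following is a sketch of a common three-facet dichotomous form, an assumption about the general model family rather than this study's exact specification:

```python
import math

def mfrm_probability(ability: float, difficulty: float, severity: float) -> float:
    """Three-facet Rasch model: log-odds of success equal examinee ability
    minus task difficulty minus rater severity (all in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty - severity)))

# The same performance scores lower with a severe rater (severity 0.8)...
print(mfrm_probability(ability=1.0, difficulty=0.0, severity=0.8))   # ~0.55
# ...than with a lenient one (severity -0.8).
print(mfrm_probability(ability=1.0, difficulty=0.0, severity=-0.8))  # ~0.86
```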


Language Testing | 2012

Investigating the validity of an integrated listening-speaking task: A discourse-based analysis of test takers’ oral performances

Kellie Frost; Catherine Elder; Gillian Wigglesworth

Performance on integrated tasks requires candidates to engage skills and strategies beyond language proficiency alone, in ways that can be difficult to define and measure for testing purposes. While it has been widely recognized that stimulus materials impact test performance, our understanding of the way in which test takers make use of these materials in their responses, particularly in the context of listening-speaking tasks, remains predominantly intuitive. Recent studies have highlighted the problems associated with content-related aspects of task fulfilment on integrated tasks, but little attempt has been made to operationalize the way in which content from the input material is integrated into speaking performances. Using discourse data from a trial administration of a pilot for an Oxford English language test, this paper investigates how test takers integrate stimulus materials into their speaking performances on an integrated listening-then-speaking summary task, whether these behaviours are reflected in the relevant rating scale and, by implication, whether the test scores assigned according to this scale reflect real differences in the quality of oral performances. An innovative discourse analytic approach was developed to analyse content-related aspects of performance in order to determine if such aspects represent an appropriate measure of the speaking ability construct. Results showed that the measures devised, such as the number of key points included from the input text, and the accuracy with which information was reproduced or reformulated, effectively distinguished participants according to their level of speaking proficiency. The study’s findings support the use of this particular task-type and the appropriateness of the associated rating scale as a measure of speaking proficiency, as well as the utility of the devised discourse-based measures for the validation of integrated tasks in other assessment contexts.
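One of the devised measures, the number of key points included from the input text, can be pictured as a simple coverage count; the sketch below assumes each key point is represented as a set of content words to be found in the transcribed response (an illustrative simplification of the paper's discourse analysis, with all data invented):

```python
# Invented key points from a listening input, each as a set of content words.
key_points = [
    {"population", "growth", "slowing"},
    {"urban", "migration", "increasing"},
    {"birth", "rate", "declining"},
]

def key_point_coverage(transcript: str, points: list) -> int:
    """Count the key points whose content words all occur in the response."""
    words = set(transcript.lower().split())
    return sum(1 for point in points if point <= words)

response = "the speaker argued population growth is slowing as the birth rate keeps declining"
print(key_point_coverage(response, key_points))  # 2 of the 3 key points covered
```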

Collaboration


Dive into Catherine Elder's collaborations.

Top Co-Authors

Tim McNamara
University of Melbourne

John Pill
University of Melbourne

Alan Davies
University of Edinburgh

Gillian Webb
University of Melbourne

Hyejeong Kim
University of Melbourne